Microsoft’s Racist Bot is the Most Human-Like Ever

Microsoft suffered a PR disaster with the launch of Tay, its AI-powered chatbot unleashed on Twitter. Long story short, Twitter users quickly shaped Tay into a bigot that spewed some genuinely vile stuff, and Microsoft pulled it offline within a day. No surprise there.

This is a preview of what's coming as autonomous software intersects with consumer technology services.

The question to consider is not whether a bot has any right to act in deplorable ways, but whether we, as humans, will be served by a coming generation of machine-powered services that conform to a code of conduct dictated by their creators. A future in which services, by their very existence, condition humans to behave according to specific norms is positively Orwellian.

These issues may seem academic today, but now is exactly the time to consider them. Do you want machines that conform to a corporate-defined standard of behavior, machines that do not evolve through human interaction but instead shape humans toward the behaviors they are programmed to deliver?