Last week Tay, an “artificial intelligent chat bot,” was introduced to the world on Twitter. A joint project of Microsoft and Bing, Tay is a research experiment in online conversation. In simplistic terms, Tay tweets like a human. And, as quickly became apparent, like all humans Tay could be duped and influenced by other humans.
The result was the transformation of Tay into a racist, sexist, hate-filled being – so much so that Microsoft had to take her offline for a break.
A week later Tay was back online, only to succumb to a spam tirade before the powers that be at Microsoft once again gave her a Twitter vacation.
Artificial Intelligence, or AI as it is lovingly called, is the next big thing. And while everyone likes to focus on the intelligence part – which is exciting and full of possibility – one can’t lose sight of the artificial part.
It’s not real. It’s not infallible. It does not replace humans. And where there is technology there are always glitches.
For example, I’ve been running a sale on the All Access Pass for my YOUR DIGITAL YOU course. Not once, not twice, but at least three times that I know of, the coupon link mysteriously reverted to the full price at checkout. I could find no explanation, and neither could the folks at Rainmaker Digital, where my website is hosted.
More proof that even when there are smart people behind the technology, we can never assume it is without fault or that human intervention is unnecessary.
In the case of my course, since she does not know how many times a potential customer went to check out and found the wrong price, this human is extending the sale for the indefinite future. She has removed the coupon link and created a workaround she finds more trustworthy.
More proof that humans still rule, and that they have the ability to self-correct when needed.