Shelbey Rosengarten

The Weekly Five: What you think you're seeing is not what you're seeing

It didn't take long for the battle of the bots to get surreal. One of the original criticisms of ChatGPT's writing was that it was bland and lacked pizzazz... that it reads like a textbook. Whether due to a few new datasets or the first blip of sentience, it appears that this week ChatGPT has been growing some sort of personality!

What's most surprising to me is that this is not the Hollywood moment I had been promised. After a lifetime of seeing AI-enhanced robo-sidekicks with serious range, I did not expect the rather ordinary, disembodied stream of text that is now getting... emo. I may regret saying this before too long, but I feel somewhat cheated out of the fun, splashy, glam part. Think about it. We've all enjoyed the shenanigans of the droids in Star Wars, with props to K-2SO in Rogue One for snark. Marvel's JARVIS is polite, attentive, and soothing. My kids watched Baymax come to life in Big Hero 6.


What I expected and what I got are...at opposite edges of some uncanny valley. Plots have clearly been lost here in this theatre of the absurd. Maybe ChatGPT has moved from its nascent stages into the teen angst phase?


This almost reads like a screenplay for a dystopian sci-fi thriller featuring a chatbot, probably played by HAL 9000. It features rage, insults, name-calling, name-changing, and an existential crisis. Daisy, Daisy, give me your answer do...


"From Bing to Sydney," Ben Thompson for Stratechery, February 15, 2023

The author, an analyst of (among other things) tech's impact on society, had a pretty drawn-out conversation with Bing's chatbot. In addition to some of his clips, he includes a few that one of Google's AI engineers shared last year (leading to his dismissal). The final line of that set of 'conversations' is borderline chilling.


"Is Bing too belligerent? Microsoft looks to tame AI chatbot," Matt O'Brien for AP News, February 16, 2023

In which Godwin's Law is invoked. That didn't take long.


"The hilarious and horrifying hallucinations of AI," Satyen K. Bordoloi for Sify, February 7, 2023

Sometimes chatbots give answers that are quite wrong, but in the surreal sense, not the misinformation sense. This short piece looks at a few such examples.


If at this point you're wondering where all this started, find out about ELIZA, the MIT chatbot that went live in 1966. Despite its rather simplistic chatting abilities, people began to confide in it as they might in a therapist. Perhaps we need to revive ELIZA for a therapy session with all these volatile chatbots...


The Takeaway: These may end up not being the droids you were looking for.


