This is Not Me….and I Didn’t Say That!

These days I frequently think about the old adage that “life is a journey from certainty to doubt”. It has hit me particularly hard with a combination of different news items this week. We all know that you can’t believe everything you read online. It is also clear that knowing who you are dealing with online can sometimes be confusing. One of the most famous jokes in the history of computing says that on the Internet “no one knows you’re a dog”. In the latest version, of course, things become more surreal: on the Internet “no one knows you’re a robot”, an invented artifice aimed at impersonating a human in order to deliver targeted advice, to befriend, or much worse. While I get this concept and try to act accordingly, in practice it still seems rather a stale idea to me.

But now it feels like I am at last beginning to understand the implications of this online identity crisis. Take, for example, the recent news that OpenAI has decided not to release its latest text generator because it is “too good to release” in its current form. The challenge, they say, is that it can now generate “news” articles so compelling and well-structured that it is almost impossible to tell whether they are legitimate. Furthermore, it adapts so well to the environment in which it operates that its own inventors are worried about “enabling malicious or abusive uses of the technology”. Are we really at the stage where we’re scared to release the systems we’ve created for fear of misuse? Is the balance between the positive impact and the potential for manipulation of AI-based solutions so unclear? We seem to have moved from worrying about releasing software that might crash to worrying about releasing algorithms that might become something we can’t control.
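To get a feel for how low the technical bar has become, here is a minimal sketch, assuming only the smaller, publicly released version of the model and the Hugging Face transformers library (not the withheld system the article describes); the prompt is purely illustrative.

```python
# A minimal sketch (not OpenAI's withheld model): generating fluent text with
# the smaller, publicly released model via the Hugging Face transformers library.
from transformers import pipeline

# Load a pretrained text-generation pipeline; "gpt2" is the small public release.
generator = pipeline("text-generation", model="gpt2")

# An illustrative prompt; the model simply continues it in a plausible news register.
prompt = "Scientists announced today that"
result = generator(prompt, max_length=80, num_return_sequences=1)

print(result[0]["generated_text"])
```

Even this scaled-down model produces text fluent enough to give pause, and the withheld larger versions are, by OpenAI’s own account, considerably more convincing.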

A related aspect of this shifting balance concerns fake identities online. Again, this is a long story that has been investigated extensively over the years. However, I was very affected by this article and the website www.ThisPersonDoesNotExist.com. Using machine learning, the site generates a new fake face each time you reload the page. It is extraordinary how convincing and “real” they are. Is it truly that easy to generate images that would fool just about anyone? Can anyone invent a new persona on demand? It seems so.

For some reason, I find this combination of articles particularly spooky. Probably because what they generate is so lifelike, believable, and uncontrollable… it really makes me confront the question “How can I trust that anything I see online, or any digital interaction I have, is real?”. The short answer is that I can’t. And that’s something I am having a great deal of trouble coming to terms with.
