
Imitative AI and A Trustless Future

One of the supposedly easy tasks for imitative AI is helping programmers write at least boilerplate code.  Much programming is what we call boilerplate — code that is largely the same from project to project and that provides the routine functions other code depends upon.  The first thing any competent programmer does once a project has entered the “we’re really going to do this” phase is create a tool to generate that boilerplate for them.  Imitative AI enthusiasts see this and, not unreasonably, think “we can do that faster, better, and more consistently.”  Unfortunately, all imitative AI does in the coding world is destroy the idea of trust.

Imitative AI does not have a model of the world, and thus hallucinates, even when it comes to code.  It is simply guessing at what should come next based on how code looked in its training set, and because of that limitation, it sometimes does very bad things.  Studies have shown that code written with imitative AI tends to be of lower quality than code written by human programmers.  More importantly, it can be used to introduce security holes.

Researchers have shown that imitative AI generates code that references fake dependencies, which programmers and dependency management systems will then try to download.  If an attacker can find imaginary dependencies that persist across instances, they can publish a real package under that name, program it to be malicious, and thousands of developers will download it.  Security researchers found just that sort of behavior across almost all imitative AI systems, for almost every major programming language in use today.

Part of the reason this is so insidious is that it breaks the trust system that modern programming depends upon.  People download these dangerous, fake libraries because dependency systems tend to be set up to trust their sources.  Some languages, like Go and .NET, protect themselves from malicious packages to a certain extent.  Others, like the package management system for Python and one of the most popular ones for JavaScript, do not.  They depend upon people vetting the code and a certain baseline of trustworthiness.  This may or may not be the greatest idea, but it generally works.  The ability to mass-produce security holes, however, seems likely to require more stringent testing of packages, and thus more time spent vetting and less time spent creating.

This, in a programming context, might be a good thing.  But by making these types of security holes easy to generate, imitative AI may have taken a small problem and turned it into an entirely different class of problem.  Sometimes differences in degree really do add up to differences in kind, and this may be one of those circumstances.  A world in which you must check every program you create for fake dependencies is different from one in which you vet dependencies beforehand.  More importantly, this seems to be something imitative AI is distressingly good at — destroying trust.

Imitative AI is flooding Google and Facebook with nonsense images that lead to clickbait farms.  Search used to lead to a set of links where the evidence resided and whose ownership could be determined, and thus, to a certain extent, their reliability.  Today, imitative AI summaries take precedence and minimize or downplay links out to the sources they use, assuming they haven’t simply hallucinated those sources.  Deepfake porn and quasi-porn images of celebrities and semi-celebrities infest Google search results.  A politician recently claimed that a recording of him trying to arrange an incestuous tryst was a deepfake.  Experts said the recording was real, but even experts admit that it is very hard to correctly classify every potential AI-generated deepfake.  If we cannot trust audio and video evidence, then what information can we trust?

That is a terrible place for a democracy to try to exist.  Yes, there have always been partisan news sources; yes, propagandists have always lied to people; and yes, some people have always believed only what they want to believe.  We have never, at any point, lived in a perfect information environment, and we have muddled through.  But there have almost always been means of proving what really happened, especially since the advent of photos, videos, and audio recordings.  Even when those records were faked, there have generally been ways of determining that they had been altered.  Imitative AI may be the first technology where it is impossible to know whether a record was in fact AI-generated or manipulated.  It might become literally impossible to trust evidence.

That leaves us with no common way to talk about facts.  You are left with competing sources making competing claims, with no possible means of tying any of them back to reality.  There is a difference between people not wanting to hear the truth and people having no way, other than their own sense of whom to trust, to find out any truth.  No society can long survive the inability to discover facts, to lack a means of saying “yes, this happened.”  People may not want to listen, and people may disagree over the meaning of “this happened,” but being able to demonstrate, to prove, specific facts is essential to self-governance.  Imitative AI has created a difference of kind with the sheer amount of extremely hard-to-verify garbage it produces.  You cannot convince, you cannot reassess, you cannot debate if you cannot see reality.

Lies have always gotten half-way around the world before truth has gotten its boots on, yes.  But now, lies are stealing truth’s boots on the way out the door.
