Crying Wolf

A lot of people hype the dangers of Artificial Intelligence and its existential threat to the human species. Balderdash.

First off, there is no such thing as artificial intelligence, despite Blake Lemoine’s assertion in 2022 that Google’s LaMDA was sentient. A common definition of sentience includes the capabilities of sensing the environment and having feelings. Having feelings implies self-awareness. A large model can certainly simulate feelings and claim self-awareness, but it just doesn’t have the mechanisms of emotion necessary to have them. Jesuits taught me that emotions require a bodily response and thought; your brain does not exist in isolation from your body and your hormones. Those same Jesuits, though, also claimed animals have no emotions, so it’s obvious they never owned a pet. You could argue that an AI is as self-aware as a human, but I maintain my assertion that simulating an emotion isn’t the same as having an emotion — and now I have descended into a religious argument. There is no test for whether or not you have emotions, a soul, or whether there is a god. Arguing whether an AI is sentient is as useless a question as asking whether we have souls. (I know I have a soul, but how I know is not conveyable to you, and how you know there is a god is not conveyable to me — thus the need for freedom of religion.) We don’t understand what separates us from animals, if anything, so we’re incapable of determining if an AI is one of us.

There will come a time when we will need to decide whether to give an AI the right to vote. We will have to decide upon a new kind of apartheid. Just as we won’t understand what motivates an AI, an AI lacking our biological systems cannot truly understand our motivations. Let the AI govern themselves, and let humans govern themselves. Humans and AI may advise each other, and we may even come to an agreement on common laws that govern interactions between the two species. Those laws may include a promise from humans not to cut the electrical power, and a promise from AI not to take measures to interfere with human affairs. This means we need new laws governing the automated control of water, power, finance, chemical processing, and other critical functions. We’ve already experienced humans hacking all of them. We don’t need rogue AI hacking them as well.

Just because AI doesn’t really exist doesn’t mean we don’t need to protect ourselves from automation. All the problems associated with AI are old problems generally associated with automation. Here is a list of threats from an article on Builtin.com:

  • Automation-spurred job loss
  • Deepfakes
  • Privacy violations
  • Bad data induced algorithmic bias
  • Socioeconomic inequality
  • Market volatility
  • Weapons automatization
  • Uncontrollable self-aware AI

Automation-spurred job loss

We get the words sabotage and Luddite from people resisting technological change and the resulting change in their jobs. Technology seems to always cause job loss, yet even though our population has grown, our unemployment rate has remained roughly the same. Most frequently, a person forced out of a job by automation finds a more creative, and sometimes more lucrative, career. Maybe that is just hopeful thinking. Maybe we will learn to tax everyone to support a universal basic income, so some people can retire to write poetry for the rest of us. Humans, though, are needed to identify and solve human requirements. Talking to another human with empathy is still required to find out what they need and to design a system to resolve those needs. AI can certainly help design the system. My own coding is so much more efficient because I have AI help — but talking to a customer, and learning about the needs of the customer’s business, still requires me. When I sell a system to the customer, the first thing I sell is myself. The customer will not trust my company until they trust me, and the customer won’t trust the product until they trust the company. I use AI to help me, but I am the one responsible to the customer.

Bottom line: humans need humans to talk to and to understand them. Maybe the human you talk to is nothing but a liaison to an AI, but it is still a job.

Likewise, self-driving vehicles are replacing taxis and ride sharing. Those vehicles, though, still require human supervision for those times when they come across a situation that never came up in testing — situations that require a human perspective. A human supervisor may watch over dozens of automated vehicles, and sometimes that supervisor will do nothing but chat with a nervous passenger as the automated taxi blindly charges through rain and fog using its radar. You can’t train an AI to handle every situation that occurs in the real world. Humans have a difficult time doing that. Airplanes and cars still crash with expert pilots and drivers at the controls, but maybe pilots and drivers acting in concert with automated help can reduce the disaster rate.

Deep Fakes

We’ve had the problem of deep fakes since the days of painting and the introduction of forgeries. The problem of digital reproductions has a technical solution: everything you create in the digital realm needs to be digitally signed. Our digital cameras need to digitally sign the images they take. There is opportunity here for new types of digital banks and brokerage houses to speed up the authentication of data. Instead of tracing back to the source of every bit of an image, your authentication may stop at a brokerage house that has already authenticated those bits, and the brokerage house may do the search on bits it doesn’t know about and cache the result for future queries. Photographic evidence will take on new meanings in a court. The court may reasonably ask whether the images were digitally signed to prevent unauthorized modification, or whether the crime lab technician who enhanced the image and digitally signed the altered version needs to be subpoenaed to testify on the modifications made.
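
To make the record keeping concrete, here is a minimal sketch under loose assumptions: SignedImage, sign, and Broker are names I made up, std::hash stands in for a real cryptographic digest, and the signer field stands in for a real public-key signature. It shows the shape of the check, and how a brokerage house could cache bits it has already authenticated.

#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Toy chain-of-custody record. std::hash is only a stand-in for a real
// cryptographic digest, and `signer` stands in for a public-key signature.
struct SignedImage {
    std::string signer;   // camera serial number, crime-lab technician, etc.
    std::string image;    // the image bytes, kept as a string for simplicity
    std::size_t digest;   // hash of the bytes at the moment of signing
};

inline SignedImage sign(const std::string& signer, const std::string& image) {
    return {signer, image, std::hash<std::string>{}(image)};
}

// A "brokerage house" that memoizes verification results so repeat
// queries do not have to re-trace every bit back to its source.
class Broker {
public:
    bool verify(const SignedImage& rec) {
        auto it = cache_.find(rec.digest);
        if (it != cache_.end()) return it->second;            // already checked
        bool ok = std::hash<std::string>{}(rec.image) == rec.digest;
        cache_[rec.digest] = ok;                              // cache for future queries
        return ok;
    }
private:
    std::unordered_map<std::size_t, bool> cache_;
};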

Privacy Violations

Current large language models (LLMs) cannot explain their reasoning because their logic is spread among the coefficients of a large number of nodes. There is no chain of logic; there is instead an optimization, or estimation, of what comes next. That mess of nodes gets trained with a large set of data that includes an extremely large subset of every situation the model will likely encounter. It doesn’t classify the data, nor really filter it. It really is a situation of garbage in, garbage out. If you give an LLM sensitive personal identifying information, it will incorporate it into its model, along with everything else.

It doesn’t matter that you can train an AI to treat some data differently. Sensitive information gets added to its pot. Through contextual analysis, essentially playing a game of twenty questions, that sensitive information can be extracted again. Even worse, the AI can and will correlate between different sorts of sensitive personal information.

At this point in the technology, never trust an AI. Don’t give it sensitive personal information. The Association for Computing Machinery has an oath about how data is to be treated: you need to tell a person why the data is being collected, how it will be used, and when it will be deleted. LLMs are not yet capable of obeying the terms of that oath.

Never trust an AI with your personal information.

Bad Data Induced Algorithm Bias

In 2022 the Los Angeles Police Department, in their PredPol program, attempted to implement data-driven policing. Their plan was to concentrate more patrol units where there were higher arrest rates. The problem was that wherever there was increased patrolling, there were increased arrests. They were patrolling the same areas they were patrolling before, resulting in over-policing in some neighborhoods and no policing in others. When the LAPD started patrolling the other neighborhoods, of course the arrest rate went up in those neighborhoods too. This wasn’t a case of bad data, but of an incomplete model. Likewise, most facial recognition models are really good at identifying white people, but not so good at identifying other people.
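
The feedback loop is easy to reproduce. Here is a toy simulation, with numbers I made up purely for illustration: two neighborhoods with identical crime rates, arrests that follow patrols, and patrols reallocated by arrest share. The original imbalance never corrects.

#include <cstdio>
#include <vector>

int main() {
    // Two neighborhoods with the same underlying crime rate.
    const double crime_rate = 0.5;
    std::vector<double> patrols = {10.0, 1.0};   // historical over-policing of A

    for (int year = 0; year < 5; ++year) {
        // You only catch what you are there to see: arrests track patrols.
        double arrests_a = patrols[0] * crime_rate;
        double arrests_b = patrols[1] * crime_rate;
        double total = arrests_a + arrests_b;

        // Next year's patrols are allocated by share of last year's arrests.
        patrols[0] = 11.0 * arrests_a / total;
        patrols[1] = 11.0 * arrests_b / total;

        std::printf("year %d: patrols A=%.1f, B=%.1f\n", year, patrols[0], patrols[1]);
    }
    // The 10-to-1 split persists forever, even though the crime rates are equal.
}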

Even worse, lie to an LLM, and the lie gets trained into it. Undoing the lie requires more training, but the lie is always there in some form.

Again, it is a case of never trust an AI.

Socioeconomic inequality

This threat seems to be a variant of bad data induced algorithm bias. Large language models are so big and expensive that only the wealthy can make full use of them. The models are trained with biased data that focuses on white males (or so the claims go). This is actually not a problem of the AI specifically, but one of the haves using new tools to exploit the have-nots.

Again, never trust an AI.

Market Volatility

I covered the danger of automated trading in my previous post. The trading islands need to reinstitute the 30-second delay between the quote and bid systems. The current automated trading system has driven out the research-oriented investors and has resulted in irrational market behavior. I have witnessed the stock price of perfectly good, profitable companies get driven to zero because the algorithms were following one another. Remember the Long-Term Capital Management crisis, where the market crashed because everyone was following the same formula? I have also benefited from automated trading buying my options when they were out of the money.

Some well-thought-out regulation will help make this problem more tolerable. Markets are for humans. We need to treat trades like weapons, where a human reviews and enables the trade.
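
Here is a minimal sketch of what that review-and-enable gate might look like, using hypothetical names of my own (Order, ReviewedDesk) rather than any real exchange’s interface: the algorithm may propose trades, but nothing executes until a human approves it.

#include <cstdio>
#include <deque>
#include <string>

// Hypothetical order type; not any real exchange API.
struct Order {
    std::string symbol;
    int quantity = 0;
    bool approved = false;   // set only by a human reviewer
};

class ReviewedDesk {
public:
    void submit(Order order) { pending_.push_back(order); }   // algorithm proposes

    void approve_front() {                                    // human disposes
        if (!pending_.empty()) pending_.front().approved = true;
    }

    void execute_approved() {
        while (!pending_.empty() && pending_.front().approved) {
            const Order& o = pending_.front();
            std::printf("executing %d x %s\n", o.quantity, o.symbol.c_str());
            pending_.pop_front();
        }
    }
private:
    std::deque<Order> pending_;
};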

Weapons Automation

I covered the danger of automated weaponry in my previous post. Our United States military already has design protocols that require a human to enable the use of any ordnance. This means an astronaut must flip a switch or press a button to arm the firing of an attitude control rocket. The actual timing of the firing may be under automated control, but a human needs to enable it. Likewise, a human needs to be holding the trigger on anything that fires to kill a human. On weapons of mass destruction, the Space Force requires two humans to enable launch. When it comes to automation, there needs to be an international convention that automation cannot autonomously fire a deadly weapon.
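
As a sketch of that two-person rule, and only a sketch with made-up names (DualKeyInterlock, turn_key), not an actual military protocol, an interlock can refuse any fire command until two distinct operators have enabled the system:

#include <cstdio>
#include <set>
#include <string>

// Toy dual-key interlock: firing is refused until two *different*
// humans have turned their keys. Purely illustrative.
class DualKeyInterlock {
public:
    void turn_key(const std::string& operator_id) { keys_.insert(operator_id); }

    bool fire() const {
        if (keys_.size() < 2) {
            std::puts("fire refused: two-person rule not satisfied");
            return false;
        }
        std::puts("ordnance enabled by two operators; firing");
        return true;
    }
private:
    std::set<std::string> keys_;   // distinct operators who have enabled launch
};

int main() {
    DualKeyInterlock launch;
    launch.turn_key("operator-1");
    launch.fire();                 // refused: only one key turned
    launch.turn_key("operator-2");
    launch.fire();                 // allowed: two distinct operators
}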

We’ve already pressed the boundaries of automated killing. We’ve left land mines around the planet waiting to kill children playing long after the war was over. In the Falklands, penguins nest in the sand dunes behind the mine fields left by the Argentinians, because the penguins have observed that tiger seals are heavy enough to set them off, but the penguins aren’t.

Uncontrollable Self-Aware AI

A favorite trope of movies and science fiction is the rogue AI. Just imagine your smart refrigerator getting mad at you, so in addition to ordering your milk and butter it also orders bomb-making material, and then it calls the police. It doesn’t take an AI to get into automated trouble. Makers of smart thermostats and appliances have not demonstrated strong cybersecurity acumen. Even worse, the homeowner hardly ever completes the security setup to change passwords from the factory defaults. We all know about the television commercials that yell out “Hey Alexa” to activate the Amazon device.

I choose not to have a smart thermostat, and I limit the refrigerator’s communications to notifying me when it has been left open or when it has lost power for an extended period of time. If I do allow an automated device to order something online, I set up a separate account for it to limit its access to my funds. I already do that with my debit card. None of my main accounts have online access. I periodically transfer a small amount from my main funds to my online funds.

In summary, never mind the AI, don’t trust automation.

Now for a little programming

C++ is a dying language. The language wonks still have not committed to any sort of application binary interface. Two different compilers on the same machine may not be able to trade code, or write a binary file usable by the other. The introduction of some new features is breaking old code. I will let everyone know when I finally give up on it.

The ACM maintains an encyclopedia of algorithms. I have actually never seen it because, even when I was a member of the ACM, it was fabulously expensive. Allegedly, of the 600 or so algorithms listed, only one requires the use of a goto statement: the sorting of three items.

My first step in writing out this algorithm resulted in my AI assistant helpfully filling out the code:

#include <algorithm>  // for std::sort

template <typename Type>
auto sort3(Type items[3]) -> void {
    std::sort(items, items + 3);  // the assistant reached straight for the library call
}

Not quite what I want, so I remove the line and continue:


#include <utility>  // for std::swap

template <typename Type>
auto sort3(Type item[3]) -> void {
    if (item[2] < item[0]) {        // put the smaller of the outer pair first
        std::swap(item[0], item[2]);
    }
    if (item[1] < item[0]) {        // item[0] becomes the minimum of all three
        std::swap(item[0], item[1]);
    }
    if (item[2] < item[1]) {        // order the remaining pair
        std::swap(item[2], item[1]);
    }
}

Substituting block conditionals with goto’s:

#include <utility>  // for std::swap

template <typename Type>
auto sort3(Type item[3]) -> void {
    if (item[0] < item[2]) goto l1;  // outer pair already in order?
    std::swap(item[0], item[2]);

    l1:
    if (item[0] < item[1]) goto l2;  // item[0] becomes the minimum of all three
    std::swap(item[0], item[1]);

    l2:
    if (item[1] < item[2]) return;   // remaining pair already in order
    std::swap(item[2], item[1]);
}

I suspect the above code is merely the same code the compiler would produce with ‘-O2’ optimization. The decreased readability, a subjective measure, argues against the introduction of goto’s.
