I remember an English lesson I received in grammar school that serves me well in our current disinformation age. I learned to read news from several sources to decide what was actually true. Through experience I learned which sources were verifiable and trustworthy, but even trustworthy sources could occasionally get it wrong. Reading from several sources, and looking for primary sources of information, allows me at least a slight chance of determining the truth.
The Internet and social media have not made it easier to determine the truth. I’m disgusted by the number of people who ask their social media acquaintances for answers that they should be getting from reliable sources, or should already know. I really don’t think you should ask the Internet whether to take a child to the doctor because they swallowed a bolt or have a rash.
The Internet lies, and even worse, it changes. We all heard about Edward Welch driving to a D.C. area pizza parlor with an assault rifle to stop a sex trafficking ring being run from the parlor’s non-existent basement. From my personal experience, I distinctly recall one of the US Air Force’s first female combat pilots firing an anti-satellite missile at the Solwind satellite. Wikipedia reports that a man shot down the satellite in that same event. I’m too lazy to dig up the microfiche of the company newsletter proudly touting that a woman combat pilot fired our missile, so I have no evidence with which to edit the Wikipedia article.
One of the other things I learned from that time was the design protocol to never trust automation to “fire ordnance”. If, for timing purposes, we required computers to calculate the firing time, we used an array of electromagnetic relays to take the output of multiple computers and arrive at a consensus. Even then, a human initiated the chain of events leading up to the stage separation. On the Inertial Upper Stage, deployed from the Space Shuttle, there was a lever on the side of the booster that turned the booster on. It’s impractical to have an astronaut suit up to flip the lever, so there was the equivalent of a tin can over the lever tied to a string (a “lanyard” in NASA-speak) with the other end tied to the Space Shuttle. The astronaut mission specialist would flip the switches to tilt the booster’s cradle in the Space Shuttle and release the springs to push the booster out. As the booster drifted out, the string would tighten and flip the lever, turning on the booster. Only then would the booster’s computers boot up, calculate the booster’s position, and wait until the booster had drifted far enough away from the Space Shuttle to fire its attitude control jets, turn the booster around, and fire the first stage.
The US military also had us apply this principle to weaponry. Automation could not initiate anything that could kill a human. A human needs to be holding the trigger on anything that could kill another human.
The same principle of not trusting automation applied to finance. The first NASDAQ trading islands were required to delay the quote information on stocks by several seconds before it reached the bidding systems. This was to discourage feedback loops in automated trading systems. Since then those limits have been eliminated, or have simply become ineffective. At least once, the stock price of a perfectly good company was driven to zero, causing a suspension of trading. After a day, the stock returned to its “normal” expected price. The SEC has already commented that high-frequency trading, based on nothing but fluctuations of stock prices, is driving out the research-driven investors who look at the fundamentals of a company, like profitability, indebtedness, and gross sales.
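That delay requirement can be pictured as a simple hold queue: quotes are stamped on arrival and released to the bidding systems only after a fixed hold period. The sketch below is purely illustrative; the class name, the tick-based clock, and the interface are my own inventions, not any exchange’s actual system.

```cpp
#include <queue>
#include <vector>

struct Quote { double price; long arrivalTick; };

// Hold incoming quotes for a fixed number of ticks before releasing
// them, damping the feedback loops that instant quotes invite.
class DelayedFeed {
public:
    explicit DelayedFeed(long delayTicks) : delayTicks_(delayTicks) {}

    void submit(double price, long nowTick) {
        pending_.push({price, nowTick});
    }

    // Release every quote whose hold period has expired by nowTick.
    std::vector<Quote> poll(long nowTick) {
        std::vector<Quote> released;
        while (!pending_.empty() &&
               nowTick - pending_.front().arrivalTick >= delayTicks_) {
            released.push_back(pending_.front());
            pending_.pop();
        }
        return released;
    }

private:
    long delayTicks_;
    std::queue<Quote> pending_;
};
```

A quote submitted at tick 0 with a three-tick delay stays invisible at tick 2 and appears only at tick 3, so a trading system can never react faster than the imposed delay.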
When it comes to AI in the modern world, these experiences suggest some fundamental rules:
- A human must initiate any action that can potentially harm another human.
- Corollary: Industrial processes that may release toxic materials, even for safety reasons, must be initiated by a human.
- Corollary: Automation cannot directly trade on the financial markets. Markets are for humans. High speed trading is harmful to the economy.
- When automation has indirect control of potentially hazardous processes (such as firing a booster after a human has enabled it), multiple redundant processes must reach a consensus to order an action.
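The relay-voting rule in the last bullet can be sketched in a few lines of code. The function below is a hypothetical illustration of 2-of-3 agreement, assuming each redundant computer produces a candidate firing time; the tolerance is an arbitrary choice of mine, not a value from any real flight system.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Allow firing only if at least two of the three redundant channels
// agree on the firing time to within the given tolerance (a 2-of-3
// majority vote, the software analogue of the relay array).
bool consensusToFire(const std::array<double, 3>& firingTimes, double tolerance)
{
    for (std::size_t i = 0; i < firingTimes.size(); ++i) {
        for (std::size_t j = i + 1; j < firingTimes.size(); ++j) {
            if (std::fabs(firingTimes[i] - firingTimes[j]) <= tolerance) {
                return true;  // an agreeing pair forms a majority
            }
        }
    }
    return false;  // no two channels agree; hold fire
}
```

Even a true result here would only arm the sequence; under the rules above, a human still initiates the action.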
Given the malleability of the web, I was really surprised that OpenAI released ChatGPT into the wild without close supervision. ChatGPT has no means of checking the information fed to it, but it learns from everything, so we should not be surprised that it now helps feed the flood of misinformation.
A large language model like ChatGPT is nothing but a bunch of nodes doing matrix multiplies. Modern neural networks add an occasional convolution algorithm. There is nothing in the model to drive inference or intuition. It responds purely from the data it was trained on.
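To make “a bunch of nodes doing matrix multiplies” concrete, here is a minimal sketch of one dense layer: each output is a multiply-accumulate of the inputs against a row of weights, passed through a simple nonlinearity. The weights are placeholders of mine; a real model just stacks thousands of much larger layers of the same arithmetic.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// One dense layer: output[k] = relu(dot(weights[k], input)).
// Inference is nothing more than repeating this across many layers.
std::vector<double> denseLayer(const std::vector<std::vector<double>>& weights,
                               const std::vector<double>& input)
{
    std::vector<double> output;
    output.reserve(weights.size());
    for (const auto& row : weights) {
        double sum = 0.0;
        for (std::size_t i = 0; i < input.size(); ++i) {
            sum += row[i] * input[i];
        }
        output.push_back(std::max(0.0, sum));  // ReLU activation
    }
    return output;
}
```

There is no lookup of facts and no checking against reality anywhere in that loop, which is the point: whatever the weights learned, right or wrong, is what comes out.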
A medical DNA testing company recognized that LLMs (large language models) are statistical in nature, but it still needed AI to scale up the review of DNA results. Human review of DNA results just wasn’t able to keep up. The company wisely created a system to monitor the results and periodically check them manually, retraining the model as needed.
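A monitoring scheme like that can be sketched as a running error tally over human-reviewed samples, flagging the model for retraining when the reviewed error rate drifts too high. The class below is entirely hypothetical; the threshold, minimum sample count, and names are my assumptions, not details of the company’s actual system.

```cpp
#include <cstddef>

// Track human spot-checks of model output and recommend retraining
// once enough reviewed samples show an excessive error rate.
class ModelMonitor {
public:
    ModelMonitor(double errorThreshold, std::size_t minSamples)
        : errorThreshold_(errorThreshold), minSamples_(minSamples) {}

    // Record the outcome of one human-reviewed result.
    void recordReview(bool modelWasCorrect) {
        ++reviewed_;
        if (!modelWasCorrect) { ++errors_; }
    }

    // True once the reviewed error rate exceeds the threshold,
    // with enough samples to make the estimate meaningful.
    bool needsRetraining() const {
        return reviewed_ >= minSamples_ &&
               static_cast<double>(errors_) / reviewed_ > errorThreshold_;
    }

private:
    double errorThreshold_;
    std::size_t minSamples_;
    std::size_t reviewed_ = 0;
    std::size_t errors_ = 0;
};
```

The human reviewers stay in the loop, and the automation only raises a flag; deciding to retrain, or to throw the model away, remains a human call.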
Even now, though, really big LLMs like Bard and ChatGPT are too big for effective monitoring. We don’t have a way to untrain a model once it has bogus data in it. One way to help an LLM defend itself is to create a second LLM trained only on proctored data. That second LLM helps train the first LLM to recognize bogus sources. The proctored LLM also helps the owners determine when they need to throw away the big LLM if it strays.
Now the rubber hits the road. A corporation that has spent millions training an LLM will be reluctant to just throw it away. Even though the large AI occasionally lies and hallucinates, it is useful most of the time. Legal regulation is doomed to failure, so we must resort to competition, where companies with LLMs advertise how well they monitor their AI. An industry group or even a government agency could rate the AIs for consumer protection, just as the U.S. government publishes the on-time performance of airlines and their safety records.
Coding Snippet
As part of a job interview, I came up with a modification of heap sort that removes duplicates from an array. I haven’t seen the technique used before, but I have to think lots of undergraduate computer science majors have come up with the same algorithm. This exact representation is copyrighted, but feel free to modify it and use it under an MIT-style license (shame on you if you attempt to put a more restrictive license on it):
// -*-mode: c++-*-
////
// @copyright 2024 Glen S. Dayton.
// Rights and license enumerated in the included MIT license.
//
// @author Glen S. Dayton
//
#include <cstdlib>
#include <algorithm>
#include <iterator>

// Delete duplicate entries from an array using the C++ heap algorithms.
// On return, the range [result, stop) holds the unique elements in
// ascending order.
template<typename random_iterator>
auto deleteDuplicates(random_iterator start, random_iterator stop) -> random_iterator
{
    auto topSortedArray = stop;
    auto theArrayLength = std::distance(start, stop);
    auto heapEnd = stop;

    if (theArrayLength == 1)
        return start;  // A single element is already unique.

    if (theArrayLength > 1)
    {
        // Transform the array into a heap ( O(n) operation (linear) ).
        // The range [start, stop) determines the array.
        std::make_heap(start, stop);

        // Perform a heap sort.
        // Pop the first element, which is the max element.
        // Shrinks the heap, leaving room for the max element at the end.
        // pop_heap is an O(log N) operation.
        auto prevElement = *start;
        std::pop_heap(start, heapEnd);
        --heapEnd;

        // Push the max element onto the sorted area.
        // The end of the array is the sorted area.
        --topSortedArray;
        *topSortedArray = prevElement;

        // Work our way up. Pop the max element off the top of the heap
        // and write it to the top of the sorted area, skipping duplicates.
        while (heapEnd != start)
        {
            auto currentElement = *start;
            std::pop_heap(start, heapEnd);
            --heapEnd;
            if (currentElement != prevElement)
            {
                --topSortedArray;
                *topSortedArray = currentElement;
                prevElement = currentElement;
            }
        }
    }
    return topSortedArray;
}
You may find this code and its unit tests at https://github.com/gsdayton98/deleteduplicates.git