The Bicameral Mind

Image: deployment of Galileo and its Inertial Upper Stage from the Space Shuttle on STS-34. NASA, public domain (https://commons.wikimedia.org/w/index.php?curid=6049516).

I remember an English lesson I received in grammar school that serves me well in our current disinformation age. I learned to read news from several sources to decide what was actually true. Through experience I learned which sources were verifiable and trustworthy, but even trustworthy sources could occasionally get it wrong. Reading several sources, and looking for primary sources of information, gives me at least a slight chance of determining the truth.

The Internet and social media have not made it easier to determine the truth. I’m disgusted by the number of people who ask their social media acquaintances for answers they should be getting from reliable sources, or should already know. You really shouldn’t ask the Internet whether to take a child to the doctor because they swallowed a bolt or have a rash.

The Internet lies, and even worse, it changes. We all heard about Edgar Welch driving to a D.C.-area pizza parlor with an assault rifle to stop a sex trafficking ring supposedly being run from the parlor’s non-existent basement. From my personal experience, I distinctly recall one of the US Air Force’s first female combat pilots firing an anti-satellite missile at the Solwind satellite. Wikipedia reports that a man fired that shot. I’m too lazy to dig up the microfiche of the company newsletter proudly touting that a woman combat pilot fired our missile, so I have no evidence with which to edit the Wikipedia article.

One of the other things I learned from that time was the design protocol to never trust automation to “fire ordnance”. If, for timing purposes, we required computers to calculate the firing time, we used an array of electromagnetic relays to take the output of multiple computers and arrive at a consensus. Even then, a human initiated the chain of events leading up to the stage separation. On the Inertial Upper Stage, deployed from the Space Shuttle, there was a lever on the side of the booster that turned the booster on. It’s impractical to have an astronaut suit up to flip the lever, so there was the equivalent of a tin can over the lever tied to a string (a “lanyard” in NASA-speak) with the other end tied to the Space Shuttle. The astronaut mission specialist would flip the switches to tilt the booster’s cradle in the Space Shuttle and release the springs that pushed the booster out. As the booster drifted out, the string would tighten and flip the lever, turning on the booster. Only then would the booster’s computers boot up, calculate the booster’s position, and wait until the booster had drifted far enough away from the Space Shuttle to fire its attitude control jets, turn the booster around, and fire the first stage.

The US military also had us apply this principle to weaponry: automation could not initiate anything that could kill a human. A human had to be holding the trigger on anything that could kill another human.

The same principle of not trusting automation applied to finance. The first NASDAQ trading islands were required to delay stock quotes by several seconds before they reached the bidding systems, to discourage feedback loops in automated trading systems. Since then those limits have been eliminated, or have simply become ineffective. At least once, the stock price of a perfectly good company was driven to zero, causing a suspension of trading. After a day, the stock returned to its “normal” expected price. The SEC has already commented that high-frequency trading, based on nothing but fluctuations of stock prices, is driving out the research-driven investors who look at the fundamentals of a company: profitability, indebtedness, gross sales.

When it comes to AI in the modern world, these experiences suggest some fundamental rules:

  1. A human must initiate any action that can potentially harm another human.
    • Corollary: Industrial processes that may release toxic materials, even for safety reasons, must be initiated by a human.
    • Corollary: Automation cannot directly trade on the financial markets. Markets are for humans. High speed trading is harmful to the economy.
  2. When automation has indirect control of potentially hazardous processes (such as firing a booster after a human has enabled it), multiple redundant processes must reach a consensus to order an action.
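
The consensus rule is easy to sketch in software terms. Here is a minimal, purely illustrative two-out-of-three voter in C++, the software analogue of the relay arrangement described earlier; the function name, the tolerance, and the idea of voting on a computed firing time are my own invention, not any real flight system.

#include <array>
#include <cmath>
#include <cstddef>
#include <optional>

// Hypothetical 2-of-3 voter: three independently computed firing times
// (in seconds) must agree within a tolerance before any action is ordered.
// Returns the agreed value, or nothing if no two channels agree.
std::optional<double> voteTwoOfThree(const std::array<double, 3>& channels,
                                     double tolerance = 1e-3)
{
    for (std::size_t i = 0; i < channels.size(); ++i) {
        for (std::size_t j = i + 1; j < channels.size(); ++j) {
            if (std::fabs(channels[i] - channels[j]) <= tolerance) {
                return (channels[i] + channels[j]) / 2.0;  // two channels agree
            }
        }
    }
    return std::nullopt;  // no consensus: do nothing and wait for a human
}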

Given the malleability of the web, I was really surprised that OpenAI released ChatGPT into the wild without close supervision. ChatGPT has no means of checking the information fed to it, but it learns from everything, so we should not be surprised that it now helps feed the flood of misinformation.

A large language model like ChatGPT is nothing but a bunch of nodes doing matrix multiplies. Modern neural networks add an occasional convolution algorithm. There is nothing in the model to drive inference or intuition. It responds purely from the data it was trained on.
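
To ground that claim, here is a toy sketch (my own, not taken from any production model) of what a single layer amounts to: a matrix multiply followed by a fixed nonlinearity. Real models stack thousands of these with billions of weights, but the arithmetic never gets more mysterious than this.

#include <algorithm>
#include <cstddef>
#include <vector>

// Toy fully connected layer: output = max(0, W * input).
// A large language model is, at its core, many of these stacked together.
std::vector<double> denseLayer(const std::vector<std::vector<double>>& weights,
                               const std::vector<double>& input)
{
    std::vector<double> output(weights.size(), 0.0);
    for (std::size_t row = 0; row < weights.size(); ++row) {
        for (std::size_t col = 0; col < input.size(); ++col) {
            output[row] += weights[row][col] * input[col];  // matrix multiply
        }
        output[row] = std::max(0.0, output[row]);  // ReLU nonlinearity
    }
    return output;
}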

A medical DNA testing company recognized that LLMs (large language models) are statistical in nature, but it still needed AI to scale up the review of DNA results; human review just wasn’t able to keep up. The company wisely created a system to monitor the results and periodically check them manually, retraining the model as needed.

Even now, though, really big LLMs like Bard and ChatGPT are too big for effective monitoring. We don’t have a way to untrain a model once it has bogus data in it. One way to help an LLM defend itself is to create another LLM that is trained only on proctored data. That second LLM helps train the first LLM to recognize bogus sources. The proctored LLM will also help the owners determine when they need to throw away the big LLM if it strays.

Now the rubber hits the road. A corporation that has spent millions training an LLM will be reluctant to just throw it away. Even though the large AI occasionally lies and hallucinates, it is useful most of the time. Legal regulation is doomed to failure, so we must resort to competition, where companies with LLMs advertise how well they monitor their AI. An industry group or even a government agency could rate the AIs for consumer protection, just like the U.S. government publishes the on-time performance of airlines and their safety records.


Coding Snippet

Probably as part of a job interview, I came up with a modification of heap sort that removes duplicates from an array. I haven’t seen the technique used before, but I have to think lots of undergraduate computer science majors have come up with the same algorithm. This exact representation is copyrighted, but feel free to modify it and use it under an MIT-style license (shame on you if you attempt to put a more restrictive license on it):

// -*-mode: c++-*-
////
// @copyright 2024 Glen S. Dayton.
// Rights and license enumerated in the included MIT license.
//
// @author Glen S. Dayton
//
#include <cstdlib>
#include <algorithm>
#include <iterator>

// Delete duplicate entries from an array using C++ heap.
template<typename random_iterator>
auto deleteDuplicates(random_iterator start, random_iterator stop) -> random_iterator
{
    auto topSortedArray = stop;
    auto theArrayLength = std::distance(start, stop);
    auto heapEnd = stop;

    if (theArrayLength > 1)
    {
        // Transform the array into a heap (an O(n), i.e. linear, operation).
        // The range [start, stop) determines the array.
        std::make_heap(start, stop);

        // Perform a heap sort.
        // Pop the first element, which is the max element.
        // This shrinks the heap, leaving room for the max element at the end.
        // pop_heap is an O(log N) operation.
        auto prevElement = *start;
        std::pop_heap(start, heapEnd);
        --heapEnd;

        // push the max element onto the sorted area
        // End of the array is the sorted area.
        --topSortedArray;
        *topSortedArray = prevElement;

        // Work our way up. Pop the max element of the top of the heap and write it to the top of the sorted area.
        while (heapEnd != start)
        {
            auto currentElement = *start;
            std::pop_heap(start, heapEnd);
            --heapEnd;

            if (currentElement != prevElement)
            {
                --topSortedArray;
                *topSortedArray = currentElement;
                prevElement = currentElement;
            }
        }
    }

    return topSortedArray;
}

You may find this code and its unit tests at https://github.com/gsdayton98/deleteduplicates.git.
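
Here is a minimal usage sketch of my own (not part of the repository), assuming the deleteDuplicates template above is in the same translation unit. The function returns an iterator to the start of the deduplicated, ascending run at the end of the array:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values{5, 3, 5, 1, 3, 9, 1};

    // Unique values end up sorted in ascending order at the back of the vector.
    auto uniqueStart = deleteDuplicates(values.begin(), values.end());

    for (auto it = uniqueStart; it != values.end(); ++it)
        std::cout << *it << ' ';   // prints: 1 3 5 9
    std::cout << '\n';

    return 0;
}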

Woodpecker Apocalypse

Weinberg’s woodpecker is here, as in the woodpecker in “If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization” (Gerald M. Weinberg, The Psychology of Computer Programming, 1971).

We’ve put our finances, health information, and private thoughts on-line, entrusting them to software written in ignorance.  Hackers exploit the flaws in that software to get at your bank accounts, credit cards, and other personal information.  We protect it all behind passwords with arbitrary strength rules that we humans must remember.  Humans write the software that accepts your passwords and other input.  Now comes the woodpecker part.

Being trusting souls, we’ve written our applications to not check their inputs, and to depend upon the user not to enter too much.  Being human, we habitually write programs that overrun buffers, accept tainted input, and divide by zero.  We write crappy software.  Heartbleed and Shellshock and a myriad of other exploits use defects in software to work their evil.

Security “experts”, who make their money by making you feel insecure, tell you it’s impossible to write perfect software.  Balderdash.  You can write small units and exercise every pathway in them.  You have a computer, after all.  Use the computer to enumerate the code pathways and then use the computer to generate test cases.  It is possible to exercise every path through small units.  Making the small units robust makes it easier to isolate what’s going wrong in the larger systems.  If you have two units that are completely tested, so you know they behave reasonably no matter what garbage is thrown at them, then testing the combination is sometimes redundant.  Testing software doesn’t need to be combinatorially explosive.  If you test every path in module A and every path in module B, you don’t need to test the combination, except when the modules share resources (the evilness of promiscuous sharing is another topic).  Besides, even if we couldn’t write perfect software, that wouldn’t mean we shouldn’t try.
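
As a small illustration of what “exercise every pathway” means, take a unit with only a handful of branches; a few assertions cover every path through it. The function and its tests below are my own toy example, not from any particular project:

#include <cassert>
#include <stdexcept>

// Tiny unit: integer division that refuses garbage instead of crashing.
int safeDivide(int numerator, int denominator)
{
    if (denominator == 0)
        throw std::invalid_argument("division by zero");
    return numerator / denominator;
}

int main()
{
    // Path 1: the guard is not taken; normal division.
    assert(safeDivide(10, 2) == 5);

    // Path 2: the guard rejects a zero denominator.
    bool threw = false;
    try {
        safeDivide(1, 0);
    } catch (const std::invalid_argument&) {
        threw = true;
    }
    assert(threw);

    return 0;
}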

Barriers to quality are a matter of imagination rather than fact.  How many times have you heard a manager say that spending the time or buying the tool costs too much, even though we’ve known since the 1970s that bugs caught at the developer’s desk cost ten times less than bugs caught later?  The interest on the technical debt is usury.  This suggests we can spend a lot more money up front on quality processes, avoid technical debt, and come out ahead in the long run.  Bern and Schieber did their study in the 1970s.  I found this related NIST report from 2000:

NIST Report

The Prescription, The Program, The Seven Steps

Programmers cherish their step zeroes.  In this case, step zero is just making the decision to do something about quality.  You’re reading this, so I hope you’ve already made the decision.  Just in case, though, let’s list the benefits of a quality process:

  • Avoid the re-work of bugs.  A bug means you need to diagnose, test, reverse-engineer, and go over old code.  A bug is a manifestation of technical debt.  If you don’t invest in writing and performing the tests up front you are incurring technical debt with 1000% interest.
  • Provide guarantees of security to your customers.  Maybe you can’t stop all security threats, but at least you can tell your customers what you did to prevent the known ones.
  • Writing code with tests is faster than writing code without.  Beware of studies that rely largely on college-student programmers, but studies show that programmers using test-driven development are about 15% more productive.  This doesn’t count the time the organization isn’t spending on bugs.
  • Avoid organizational death.  I use a rule of thumb about the amount of bug fixing an organization does.  I call it the “Rule of the Graveyard Spiral”.  In my experience, any organization spending more than half of its time fixing bugs has less than two years to live, which is about how long it takes for the customers or sponsoring management to lose patience and cut off the organization.

So, let’s assume you have made the decision to get with the program and do something about quality.  It’s not complicated.  A relatively simple series of steps instills quality and keeps technical debt out of your program.  Here’s a simple list:

  1. Capture requirements with tests.  Write a little documentation.
  2. Everyone tests.  Test everything.  Use unit tests.
  3. Use coverage analysis to ensure the tests cover enough.
  4. Have someone else review your code. Have a coding standard.
  5. Check your code into a branch with an equivalent level of testing.
  6. When merging branches, run the tests.  Branch merges are test events.
  7. Don’t cherish bugs.  Every bug has a right to a speedy trial.  Commit to fixing them or close them.

Bear in mind that implementing this process on your own is different from persuading an organization to adopt it.  Generally, if a process makes a person’s job easier, they will follow it.  The learning curve on a test-driven process can be steeper than you expect, because you must design a module, class, or function to be testable.  More on that later.

On top of that, you need to persuade the organization that writing twice as much code (the tests and the functional code) is actually faster than writing just the code and testing later.  In most organizations, though, nothing succeeds like success.  In my personal experience, developers who learned to write testable code and wrote unit tests never go back to the old way of doing things.  On multiple occasions, putting legacy code that was causing customer escalations under unit test eliminated all customer escalations.  Zero is a great number for a bug count.

Details

  1. Capture requirements with tests.

Good requirements are quantifiable and testable.  You know you have a good requirement when you can build an automated test for it.  Capture your requirements in tests.  For tests of GUI behavior, use a tool like Sikuli (http://www.sikuli.org/).  If you’re testing boot-time behavior, use a KVM switch and a second machine to capture the boot screens.  Be very reluctant to accept a manual test; be very sure that the test can’t be automated.  Remember, the next developer who deals with your code may not be as diligent as you, so manual tests become less likely to be re-run when the code is modified.
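
For example, a requirement such as “passwords shorter than 12 characters are rejected” can be captured directly as an automated test.  Everything below is hypothetical, a sketch of the idea rather than code from a real project:

#include <cassert>
#include <string>

// Stand-in for the real function under test.
bool passwordAcceptable(const std::string& password)
{
    return password.size() >= 12;
}

int main()
{
    // The requirement, stated as executable checks rather than prose.
    assert(!passwordAcceptable("short"));          // far too short: rejected
    assert(!passwordAcceptable("elevenchars"));    // 11 characters: rejected
    assert(passwordAcceptable("twelve chars"));    // 12 characters: accepted
    return 0;
}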


Closely related to capturing your requirements in tests is documenting your code.  Documentation is tough.  Whenever you write two related things in two different places, they will drift out of sync, and each will become obsolete in relation to the other.

It might as well be a law of configuration management:  Any collection residing in two or more places will diverge.

So put the documentation and the code in the same place.  Use doxygen (http://www.stack.nl/~dimitri/doxygen/).  Make your code self-documenting.  Pay attention to the block of documentation at the top of the file, where you can describe how the pieces work together.  On complicated systems, bite the bullet and provide an external file that describes how it all works together.  The documentation in the code tends to deal with only that code and not its related neighbors, so spend some time describing how things work together.  Relations are important.
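
A minimal sketch of what that looks like with doxygen (the file, function, and parameters are hypothetical): the documentation lives next to the code it describes, and the file-level block is where the relationships between pieces get explained.

/// @file throttle.hpp
/// @brief Hypothetical example of self-documenting code with doxygen.
///
/// The file-level block is the place to explain how this piece relates to its
/// neighbors, since the per-function comments below only describe themselves.

/// Compute the throttle setting needed to close on a target speed.
///
/// @param currentSpeed  Measured speed in meters per second.
/// @param targetSpeed   Desired speed in meters per second.
/// @param gain          Proportional gain (dimensionless).
/// @return Throttle setting clamped to the range [0.0, 1.0].
double throttleFor(double currentSpeed, double targetSpeed, double gain)
{
    double throttle = gain * (targetSpeed - currentSpeed);
    if (throttle < 0.0) throttle = 0.0;
    if (throttle > 1.0) throttle = 1.0;
    return throttle;
}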

You need just enough external documentation to tell the next developer where to start.  I like to use a wiki for my projects.  As each new developer comes into the project I point them to the wiki, and I ask them to update the wiki where they had trouble due to incompleteness or obsolescence.  I’m rather partial to MediaWiki (https://www.mediawiki.org/wiki/MediaWiki).  For some reason other people like Confluence (http://www.atlassian.com/Confluence).  Pick your own wiki at http://www.wikimatrix.org/.

Don’t go overboard on documentation.  Too much means nobody will read it or maintain it, so it will quickly diverge until it has little relation to the original code.  Documentation is part of the code.  Change the code or the documentation, and change the other.

Steps 2 through 7 deserve their own posts.

I’m past due on introducing myself.  I’m Glen Dayton.  I wrote my first program, in FORTRAN, in 1972.  Thank you, Mr. McAfee.  Since then I’ve largely worked in aerospace, but then I moved to Silicon Valley to marry my wife and take my turn on the start-up merry-go-round.  Somewhere in the intervening time Saint Wayne V. introduced me to test-driven development.  After family and friends, the most important thing I ever worked on was PGP.


Today’s coding joke is the Double-Checked Locking Pattern.  After all these years I still find people writing it.  Read about it and its evils at

C++ and the Perils of Double-Checked Locking

When you see the following code, software engineers will forgive you if you scream or laugh:

static Widget *ptr = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// ...
if (ptr == NULL)
{
  pthread_mutex_lock(&lock);
  if (ptr == NULL)
    ptr = new Widget;
  pthread_mutex_unlock(&lock);
}
return ptr;

One way to fix the code is to just use the lock.  Most modern mutex implementations take a fast user-space path when uncontended (often with a brief spin before blocking), so you don’t need to be shy about using them:

using boost::mutex;
using boost::lock_guard;

static Widget *ptr = NULL;
static mutex mtx;

//...

{
    lock_guard<mutex> lock(mtx);
    if (ptr == NULL)
       ptr = new Widget;
}
return ptr;

Another way, if you’re still shy about locks, is to use memory-ordering primitives.  C++11 offers atomic variables and memory-ordering primitives; the example below uses Boost’s equivalents, which mirror the standard ones, and a version using the standard facilities follows it.

#include <boost/atomic/atomic.hpp>
#include <boost/memory_order.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

class Widget
{
public:
  Widget();

  static Widget* instance();
private:
};
Widget*
Widget::instance()
{
  static boost::atomic<Widget *> s_pWidget(NULL);
  static boost::mutex s_mutex;

  Widget* tmp = s_pWidget.load(boost::memory_order_acquire);
  if (tmp == NULL)
  {
    boost::lock_guard<boost::mutex> lock(s_mutex);
    tmp = s_pWidget.load(boost::memory_order_relaxed);
    if (tmp == NULL) {
      tmp = new Widget();
      s_pWidget.store(tmp, boost::memory_order_release);
    }
  }
  return tmp;
}
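
For reference, here is the same pattern written with the C++11 facilities mentioned above, std::atomic and std::mutex, as a minimal sketch rather than drop-in production code:

#include <atomic>
#include <mutex>

class Widget
{
public:
  Widget() = default;

  static Widget* instance();
};

Widget*
Widget::instance()
{
  static std::atomic<Widget*> s_pWidget(nullptr);
  static std::mutex s_mutex;

  Widget* tmp = s_pWidget.load(std::memory_order_acquire);
  if (tmp == nullptr)
  {
    std::lock_guard<std::mutex> lock(s_mutex);
    tmp = s_pWidget.load(std::memory_order_relaxed);
    if (tmp == nullptr) {
      tmp = new Widget();
      s_pWidget.store(tmp, std::memory_order_release);
    }
  }
  return tmp;
}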

If the check occurs in a high-traffic area, though, you may not want to pay for the atomic load, and the shared cache line behind it, on every call, so use a thread-local variable for the check:

using boost::mutex;
using boost::lock_guard;

Widget*
Widget::instance()
{
    static __thread Widget *tlv_instance = NULL;  // __thread is a GCC extension; C++11 spells it thread_local
    static Widget *s_instance = NULL;
    static mutex s_mutex;

    if (tlv_instance == NULL)
    {
        lock_guard<mutex> lock(s_mutex);
        if (s_instance == NULL)
            s_instance = new Widget();
        tlv_instance = s_instance;
    }

    return tlv_instance;
}

Of course, everything is a trade-off. A thread local variable is sometimes implemented as an index into an array of values allocated for the thread, so it can be expensive.  Your mileage may vary.