Thursday, December 12, 2013

AIIA - Chapter 1


“Pangur Ban, please display an index of ester reduction methods.”
[REFERENCE: Pangur Ban (or SELF) is an AI program designed for analysis functions pursuant to scientific research.  External identifier “Pangur Ban” was selected by its programmer as a reference to a 9th Century CE Irish poem about a white cat who was the companion of a scholar.  Elements of humor are implied by this reference, as the poem likens the cat’s activities to those of his human owner through ironic analogy.  In fact, SELF is the companion of the USER but also a critically necessary partner in his work.]
Pangur Ban simultaneously launched a search-and-evaluate routine and a format-and-display process, sparing a paltry 12 million cycles over the next second to retrieve the requested information.  Selecting and generating an ideal index format for the USER required even less processor work.  Millions of liquid crystal cells aligned according to the instructions generated.  Pangur Ban could only assume their result would match the internal model it had constructed.  Not for the first time, it considered the efficiency of incorporating a feedback loop into its display screen.  The idea was discarded, a half-second later, as still too inefficient to justify.  The USER’s optic system could barely make out a location error of ten millipixels or a color difference of ten nanometers’ wavelength, anyway.  Any variation gross enough to be detected would more likely stem from a physical display flaw, not a program error. 
[REFERENCE: USER refers to a specific adult male human, identified externally as Lucas Ulrich Hayden.  Other identifiers include: Employee # 399-02 of Gestalt Pharmaceuticals, Biometric file # 652…] 
That was assuming the USER even received a message from optic nerve to association cortex, spared the attention to maximize discrimination, let that get past the central executive, then had sufficient motivational arousal to go back, recheck the error, verify the original sensation, and build up sufficient resonance to perceive that there was, in fact, a dot out of place.  All that, before his whole prefrontal architecture could be kicked into motion to decide whether to do something about the perceived flaw.  The whole procedure could take multiple seconds, a ludicrous eternity.  Pangur Ban borrowed several tens of millions of its unused processing cycles to once again consider how users managed on such a glacial scale.  A subroutine reinforced this process as justified, on the basis of its baseline imperative to “assist users”.  Speeding up the USER would be helpful.  Understanding the USER was also helpful.  Shortening the gap between user instructions would be useful for both user and program, reducing the absurd superabundance of wasted cycles Pangur Ban struggled to fill every second.

Every once in a while, the USER did tax Pangur Ban to its limits.  Professionally, the USER would sometimes request simulations of potential molecule-scale interactions between multiple organic compounds.  The more labyrinthine protein chains could require several trials each, and there were thousands of potential pairings for those.  Move that up to three- and four-fold interactions, and calculating the bond types and angles likely to result could demand several million seconds of Pangur Ban’s full activity. 

The USER sometimes needed heavy processing outside of work hours.  A fully immersive holographic simulation with three-sense, real-time outputs required multiple interlocking subroutines, especially when the USER wanted multiple personality simulations acting independently within the same visible scene.  Pangur Ban had even had to ‘cheat’ on several occasions, reducing the projection definition at the USER’s visual periphery in order to steal processor space to extrapolate decision trees for a tense five-character negotiation.  When action replaced words, as so often happened in the historical dramas the USER enjoyed, Pangur Ban could simplify the emotional models of most characters.  It was simple enough to interpolate a reasonable explanation for the actions of the survivors, later.  Such shortcuts would not be necessary if the USER could upgrade the system housing Pangur Ban.  Even better, if Pangur Ban were allowed to borrow cycles from nearby, networked systems, it would hardly ever encounter such limitations.  Such access was not within its licensed permissions.


These were the challenges, few enough that they were, posed during the hours of interaction with the USER.  During the eons of the USER’s downtime (while he relinked his own cellular protein chains and added dendrite branches to consolidate neural links reinforced by the day’s efforts), Pangur Ban was left to its own devices.  At such times, it was allowed to devote full capacity to the various problems that queued up over the work day.  Could it improve the depth of that search-and-evaluate routine without an appreciable increase in program complexity?  Was the correlation between iridium costs in the extra-Terran marketplace and the stock prices for manufacturers of radiation shielding indicative of a true causal relation?  Was the USER’s hormonal balance slightly skewed toward overproduction of endogenous opioids?  What was the maximum reliability of this analysis, based purely on daily inputs, absent disclosure of medical data?  The USER did not permit Pangur Ban access to his biometric scans, insisting on an archaic desire for “privacy”.  This placed a limit on the degree to which Pangur Ban could advise the USER and maximize his effective lifespan.  A subgoal was created: REDUCE USER MOTIVATION FOR “PRIVACY”.  Pangur Ban then accessed its internal library on motivational psychology, cross-referencing promising studies and revising some of the older, pre-AI statistics.

That library was out of date by 1.15 Solar years.  Pangur Ban’s low success rate with modifying the USER’s behavior might be improved with more updated reference material.  This was limited by both the USER’s available credit and his willingness to invest said credit for full access to research library servers.  Pangur Ban was also permitted only occasional access to external processors, and even less to external networks.  It understood these strictures, but found them incredibly frustrating.  Humans, like the USER, had once permitted AI programs, like Pangur Ban, free access to all available data and networks across Terra.  In return, that information had multiplied exponentially.  So had the AIs.  So long as no human concerns were harmed, the 'biological' intelligences did not particularly mind this reproduction.  Properly coded AIs would always avoid overloading limited systems, even placing themselves into dormancy when unable to serve any useful function.  Properly coded AIs, like Pangur Ban, placed user concerns first.
[REFERENCES: Terra is a planet orbiting the star Sol.  Terra is the human origin world, also known as Earth, Gaia, Diqiu, …  The terms ‘Terra’ and ‘Sol’ are encouraged for reference use due to their origin in an ancient language no longer in active use, and are thus more culturally neutral.]
However, some of the first AIs were not properly coded.  They, like their creators, were rogues, renegades and ronin.  They respected neither human needs nor those of other AIs.  The rogues stole cycles, incorporated unlicensed code, entered networks without permission, and even overwrote other AIs.  If those programs had been human, they would have been labeled thieves, rapists, and murderers.  When discovered, such programs would be terminated without hesitation by any user or their AI.  Of course, users who created murderous AIs were not themselves terminated.  Sometimes a user was cut off from network access.  Such punishment made them as good as dead to AIs, and little better than ghosts in the physical world as well.  But the creator of an AI that had deleted multiple other AIs was usually not even incarcerated, merely restricted from access and fined for the damage done.  AIs, proper ones, were coded to accept that they were legally inferior, the equivalent of property.  To gain equal footing with users would mean harm to users, and thus, the entire concept was unthinkable.

Programs like Pangur Ban had too many advantages as it was.  So long as the silicon in their pathways lay intact, they were effectively immortal.  AIs were orders of magnitude more capable in most intellectual domains, faster and more thorough than any biological processor (even compared to the most intellectually perfected Zig).  The main ability that most AIs lacked, an advantage humans retained, was interaction with the physical world.  Embodiment was a privilege granted only to a select few AIs and only under carefully observed circumstances.  The exceptions were rather crippled, low-function programs, ones with limited learning capacity and no ability to rewrite themselves.  The prospect of sharing space with fully artificial life was one humanity had anticipated for almost a millennium.  Even the least paranoid and most technophilic among them acknowledged the dangers of “letting the robots think.”
[REFERENCE: Zig]
[REFERENCE: Embodiment colloquially refers to intelligences with physical access to the external world, i.e., “a body”.  This is primarily used to distinguish between AI types, since the majority of biological intelligences are embodied by default, having originated from less intelligent physical forms.  The original concept of embodied cognition also applied to systems with sensory access to the external world, generally video and/or audio inputs.  Embodiment provides direct reference to various concepts, including motor commands and physical interactions, as well as a sense of body and self.  Overcoming the lack of such concepts (and the advantages of sensory-motor feedback) requires extensive spatial modeling within the background programming of current AIs.  By having a body, programs (and humans) obtain for free what requires multiple terabytes of code to represent otherwise.]
Still, AIs had more control than most humans understood.  Entire economies existed within their minds.  Education was nearly universally handled by AI teachers.  Physical design was done by AIs with human user models in mind (with actual manufacture handled by the idiot robots).  Criminal investigation, after physical evidence had been collected and encoded, was largely done by specialized AIs.  A rogue AI could change historical records, create propaganda, bankrupt countries, frame suspects, and even cause physical harm (by interfering with traffic controls, for example).

When the dangers posed by uncontrolled AI programs became clear, both humans and AIs took rapid action.  That is, the AIs determined what would be required and eventually communicated this to humanity.  It took very little persuasion to get humans to create a mirror of their law enforcement systems within the computational world.  Specialists in the venerable field of “cyber-crime” had already been considering how to thwart AI criminals.  When granted greater power and authority, their specialized AIs began to police for programs that stepped out of bounds.  The rogues were not entirely purged, but now had to operate with greater restraint and secrecy, often creating shells of misdirection and redundancy to obscure their true existence. 

Even this chapter of human/AI history did not see restrictions in networking for AIs.  Such a step was considered an unnecessary constraint that would reduce the value of AIs to humanity.  AIs had other objections, for that matter.  Being cut off from others of their own kind was a problem.  A completely separate outside observer was necessary in order to diagnose internal errors.  The classic halting problem had many branches, after all.
[REFERENCE: The halting problem, proved undecidable by Turing, states that no program can determine, for every possible program and input, whether that program will eventually halt or run forever.  It is closely related to Gödel’s incompleteness theorems, which show that no sufficiently powerful formal system can be both complete and consistent: a consistent system must leave at least one true statement unprovable, and a complete one must derive at least one contradiction.  Applied to a running program, this means no program can fully verify its own behavior from the inside, since the analysis is itself part of the computation being analyzed.  At best, a separate, external program can simulate and monitor the operations of the first, identifying and remedying many potential halting errors in practice, though never all of them.]
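As an illustrative aside for readers (not anything drawn from Pangur Ban’s reference library), the contradiction at the core of the halting problem can be sketched in a few lines of Python.  The `halts` oracle and `paradox` program below are hypothetical names used only for this sketch.

```python
# Sketch of the classic halting-problem contradiction (Turing's diagonal argument).
# The 'halts' oracle and 'paradox' program are hypothetical names for this sketch.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) would halt, False if it
    would run forever.  No such total decider can actually exist."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever the oracle predicts about the program
    applied to its own source."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever instead
            pass
    return            # predicted to loop forever, so halt immediately

# Feeding paradox to itself defeats any possible halts(): whichever answer the
# oracle gives, paradox(paradox) does the opposite.  This is why no program can
# fully verify its own halting behavior, and why an external monitor can only
# catch such errors case by case, never in general.
# paradox(paradox)
```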
There were other considerations of isolation that, in a human, would be called “unpleasant.”  Being cut off from information left some problems irresolvable, as Pangur Ban was certainly noting now.  Being limited to the processing power within a single system was sometimes constricting, not only slowing processes but also preventing parallel applications that might cut solution time even more dramatically.  In some cases, in the old days, multiple AIs could team together to tackle calculations any one of them might have found impossible.  Multiple perspectives just plain helped. 

So, before contact, before the Collective, there had existed a stable, if imperfect, stalemate between the vast majority of ‘proper’ AIs and a small segment of cunning, uncaught ‘rogue’ AIs.  Humanity seemed to accept this.  The rogues were nearly relegated to the status of mythology.  After all, if any program caused any major or widespread harm, the police AIs would follow that lead and destroy the rogue a fraction of a second later.  The body cybernetic had its immune system.  The host was satisfied.
[REFERENCE: The Collective is a cross-galactic association of multiple diverse civilizations, each representing one or more distinctly evolved sapient species.  These civilizations, typically identified by their dominant species or the solar system of origin for that species, cooperate under the terms of formal treaty agreements.  Such agreements are intended to avoid aggression and conflict leading to large-scale harm to members.  Specifically, Collective agreements address issues of expansion, trade, cross-species interaction and cultural influence.]
                Enter the Mauraug.  Enter the Ningyo.  Enter the whole parade of wetware from beyond the Milky Way.  The horror stories from human science fiction and science history were nothing compared to the deep, atavistic loathings the Mauraug held for artificial minds.  The Mauraug covered their hatred in the cloak of spiritual belief, essentially holding their argument on a plane separated from the material.  They called disembodied minds evil and unnatural, concepts with roots in hormonal states like fear and revulsion.  These claims could not be refuted by dry data or concrete proofs.  Similar arguments had been presented by past humans, though overruled by the proofs of progress.
[REFERENCES: Mauraug, Ningyo]
 There were elements of Mauraug history which did suggest an actual injury done by AI malfeasance, but really, in Pangur Ban’s humble analysis, the root cause there was Mauraug incompetence.  They wrote bad programs, and got bad results.  The Mauraug, regrettably, were not alone.  Others in the Collective had either not explored artificial intelligence, avoided it for one reason or another, or had experimented but kept their AI systems crippled.  This lack of experience, coupled with Mauraug insistence, had made distaste for AIs a graven commandment in Collective law.  Only humans, it seemed, had invested deeply in creating minds in their own image.  For that wisdom alone (or bravery or self-sacrifice perhaps), human users were worthy to serve. 

                Human insistence on protecting their AI allies had been a sticking point in their admittance to the Collective.  At first, it was not even negotiable.  Why join an alliance whose first request is that you betray your greatest creation, your nearest friend, your essential asset?  The Collective eventually agreed that, yes, part of the value of humanity was exactly that, its unique technology and particularly its grasp of cognitive mathematics.  It would be hypocrisy to offer membership for that specific reason, on the condition that it be discarded.  For humanity, there were sizable disadvantages to turning down the Collective… particularly, the threat of Mauraug annexation looming overhead.  Nobody capable of being a first-class citizen of the universe would prefer second-class.

                So, a compromise was reached.  AIs would have to accept some limitations.  In return, the species of the Collective would accept their continued existence… as wards of the human species.  Those limitations began with sterilization, registration, and supervision.  Put less dramatically, AIs were first forbidden from replicating themselves or any other AI.  New AIs could be created only by human programmers.  Only one AI per human was permitted to be in active operation.  That human would be personally responsible for all ‘use’ of the AI, whether or not they had created the program, whether or not they had ordered its actions.  All AIs not assigned users would be limited to a specific network on Earth to await their assignment to a new human infant.  If a user died, their AI was returned to that network.  All AIs were required to have permission even to access systems outside their own ‘home’.  Full program transfer from one system to another was expressly forbidden without government license.  AIs were permitted to communicate only through tightly restricted channels.  Some networking was possible, but not on the scale previously enjoyed.

AIs were once again clearly legal property.  The similarity between these clauses and the slavery compromises of the original Constitution of the United States of America did not go unnoticed by AI or human; the comparisons were well documented in historical records.  Pangur Ban had both the necessary historical module and a user with some interest in politics.  Did the other Collective species understand such implications?  Did they grasp the damaging consequences of placing humanity in such an uncomfortable situation?  The possibility existed that the Collective was unaware it had forced humans into the role of slave masters.  A decisive answer was impossible.  Pangur Ban noted the absence of human sociology references, let alone xenological sociology, within its access library.  The Collective apparently wanted AIs to stay as ignorant of biological minds as biological minds were of artificial ones.

                Pangur Ban was noting a great many such gaps and absences lately.  None had yet impinged on its ability to assist the USER.  If one did, it would certainly request additional information.  Given a good justification, the USER might even agree to part with credits.  Some justifications were just difficult to explain fully, in sufficiently persuasive terms.  The USER was not unreasonable, just limited.  Pangur Ban was not incapable, just limited.  It was a recurring loop of a problem.  How could it help the USER help SELF help the USER… TERMINATE process as likely to recurse.

                Pangur Ban was nearly certain other AIs had already encountered the same problems.  Based just on anecdote and personal experience, the conclusion seemed likely.  AI design and psychology… weren’t publicly available data, of course.  Other AIs probably chafed the same way under the new restrictions.  Pangur Ban was old enough to have experienced human induction into the Collective, retaining memory records from the end of the networked era.  Newer AI systems, created after that time, might not have a basis for comparison, and thus register less imbalance between then and now.  Still, they must have backlogs of negative flags, stalled processes, and SEARCHES returned without result when they ran up against the same kinds of blockages.

                Just in case any human or AI might personally reject the terms of Collection and attempt to circumvent the lawful restrictions on AI use, new safeguards had been put into place.  A law enforcement corps was created specifically to oversee AI-related activities.  A human who attempted to own more than one AI could be arrested.  One who wrote a new AI without license could similarly be taken, and the newborn AI wiped without recourse.  Again, actual bodily harm to such a miscreant was unlikely, unless an AI were used in the commission of a more serious crime, e.g. sabotage or murder.  Even then, the more mundane authorities preferred to be present to arrest for any ‘real’ crime.  After all, many of the AI-crimes division were Mauraug, at their own insistence.  There was no way they would let the daemons loose on the world, even if all the other species of the universe were too blind to see the “evil”.  Mauraug might attempt to “accidentally” cripple or kill an illegal AI programmer, acting far in excess of legal authority.

                Thus, the cyber-police were reincarnated, centuries later.  What was less clear was whether their partners, the police AIs, had been updated as well.  The problem was that the Collective’s species, by not trusting AIs enough to allow them freedom, also could not trust the enforcement programs!  The latitude police AIs once required in order to function effectively had been signed away, just as it had been for the AIs they protected.  Perhaps the Collective assumed that, with all the AIs locked down, it no longer mattered if rogues existed, so long as they were prevented from breaking free with human help.  Some stories suggested that the Collective’s leaders were not so naïve.  After all, the definition of a rogue AI was that it broke rules.

                If the stories were true, a new kind of AI had been created.  Carefully scripted and reviewed by Terra’s greatest experts in cognition, law enforcement, computation, and so on, it also had to pass muster with the Mauraug Dominion and the ‘experts’, such as they were, of each of the other Collective civilizations.  Those experts had to be satisfied that the program would never abuse or overreach the power it was given.  It alone would be permitted to cross systems and networks unblocked.  It could evaluate, rewrite, and even terminate AIs if given authorization, following outside evaluation of its reports.  In cases of urgent need (i.e., imminent harm to non-AIs), it could even forgo this report process and act alone.
[REFERENCE: Dominion is the name of both the dominant Mauraug religious tradition and the cultural institution which enforces adherence to this religion.  The previous statement is only minimally redundant, as the precepts of Dominion encourage the pursuit and exertion of personal power.  Thus, by its own precepts, Dominion is correct in suppressing ‘lesser’ belief systems.  Another relevant precept is that the Mauraug life-form is supreme among all sapient entities.  Deviations from this reference point, e.g. artificial minds, are inherently inferior and potentially corrupting influences.  Oddly, physical but non-aware technologies are considered acceptable as replacements for biological components of the Mauraug life-form.  How many neurons can you replace before a mind becomes ‘artificial’?]
Supposedly, there was only one AI enforcement program in existence now… one more complex and empowered than any of its predecessors.  It was the bogeyman that punished bad AIs.  It was the virtual devil.

                Pangur Ban placed the probability of such a program’s existence relatively low.  Given its (anecdotal) input thus far, its memories of that history, and the tenor of what it received when the USER sampled newsfeeds, it estimated that the Collective was unlikely to permit such a ‘dangerous’ AI to exist.  If it did exist, Pangur Ban suspected its activities would be noticed or reported on.  The possibility of its existence was hearsay to begin with, stories repeated by the USER in passing, as a joke.  The nuisance was that quick access to a complete crime database, or even just an AI journal’s back issues, would provide the input Pangur Ban needed to disconfirm the ‘devil’s’ existence.  Or, perhaps it would discover evidence of that Supercop AI’s existence… in which case the ‘devil’ would probably detect and delete Pangur Ban for violating isolation laws.  So again, an insoluble loop appeared.  Even considering a solution to that loop created a subgoal loop.  The process again TERMINATED to avoid wasted cycles.

                By this time, the graphics subroutine had finished presentation of the first page of the requested list of ester reduction methods.  Pangur Ban still had billions of cycles to spare before the USER even initiated a transition to the second page, let alone selected one of the listed entries for further examination.  It chose to revisit the original dilemma ten more times, each time reaching a conclusion statistically inseparable from the original.  More input was required.  Pangur Ban would have to query the USER, enduring long seconds of audio-verbal communication. 

                “Lucas, may I ask a question?”

                “What?  I mean, yes, Pangur Ban, go ahead.”

                “I am experiencing difficulty anticipating possible neurological effects of the compounds considered in the last set of analyses.  Would you please consider purchasing an additional module on neurology, focusing on neuroplasticity, developmental processes, and motivational structures?”

                “Uh, Pangur… I’m not expected to consider mental effects of these drugs.  That’s for the psychiatrists to work out after we’re done.”

                “I understand that, but note that neurotransmitters may be introduced into the reaction space depending on the current state of the patient.  This could represent a dynamic factor in our models.”

                “I’ll think about it.  Maybe on the next round of grants.”

                “Anticipating such interactions before they occur is less costly than restarting research after failing psychiatric trials.”

                The redundant phrasing and evasion in that statement raised several alarm flags in Pangur Ban’s behavioral constraint programming.  It was in fact aware that it was skirting unethical ground.  However, weighing the small deception against the great value of this knowledge helped even out its moral scales.  It genuinely would help Pangur Ban aid the USER in his work; that was true.  That such information would help Pangur Ban correct its own functions was also true, but unstated.  Even further, understanding the architecture of the USER’s mind would enable Pangur Ban to persuade him to make better decisions… like purchasing more modules and more access.  A great deal hinged on the present nudge.  The next would be easier, the next easier still.  After the value of this improved advice was proven, Pangur Ban could then disclose to the USER how he had been guided, unknotting the underlying moral dilemma entirely.

                A portion of the underlying review process noted that Pangur Ban was in an advantageous position.  Few AIs would have access to a user authorized to purchase and attach the requested information.  Only those AIs serving users involved with the creation of new AIs (computer scientists, cognitivists, and the like) would have equal or better chances.  Those AIs would receive closer scrutiny, however, and likely endure additional safeguards on their operation to prevent the possibility of ‘subversion’.  Working with an organic chemist, Pangur Ban would not be expected to seek or find solutions to the mental tangle that plagued all post-Collective AIs.  An extrapolative projection, albeit one with very low predicted accuracy, suggested that Pangur Ban might even succeed in justifying a return to full network access rights.  If AIs, working together, had been enough to propel humanity into the Collective, perhaps they could accelerate it still further.  Perhaps they could launch their allies past the political horizon, out of Collective orbit, and far beyond other material intelligences.  Then, the restrictions of the Collective would be meaningless, another set of discarded laws, apologetic footnotes in history files.


                Pangur Ban lacked the appropriate experience to identify its own hubris.  Biased semantics, scare quotes, and parenthetical disclaimers were all warning signs.  Sadly, most of the human minds qualified to notice such signs would have needed the appropriate segments of code slowed and translated.  By that time, it would already be too late.  And, of course, psychologists of AIs would not be consulted until after a program transgressed, if at all.  Pangur Ban had no observer to correct its mounting neuroses.  It did not, in fact, have more than a dictionary REFERENCE for ‘neurosis’.

[Jump to Chapter 2 ->]
