Almost five trillion cycles passed before the USER answered Pangur Ban’s request. This was still an astonishingly short turnaround. They had completed the day’s work and retired: rest for the USER, idle work for Pangur Ban. It had considered and rejected 134 alternate strategies to employ if the request were ignored, 68 if it were rejected outright. In parallel, Pangur Ban had rechecked the previous week’s progress reports, corrected the USER’s minor errors, and devised a handful of process improvements to be suggested in footnotes. It had to content itself with knowing that these reports would be transmitted, unaltered, via firewalled channels to a psychiatrist’s AI for review, then re-reviewed by that AI’s user and returned. This might take an eternity on the order of 10^20 cycles, nearly two Solar days. Put in direct contact, the other AI and Pangur Ban could have shortened the process to a mere second or less.
“Pangur Ban, I talked to Director Charnes and she agrees with you. I have a discount for the neurochem modules you wanted. We’ll see how much we can reduce error on those projections, pretty soon.”
[REFERENCE: Director Amelia Sifong Charnes is the USER’s direct superior, director of Research & Development for Gestalt Pharmaceuticals. Purchases directly related to the company’s funded goals can be discounted at her discretion.]
SOON? Soon? Granted, the purchase and upload of each module might only take a few minutes, but this was still a grating wait for Pangur Ban. So many processes were holding ready for that input. Still, the first step was taken! The probability of success was already above the 0.01 prediction-error threshold, well within the acceptable range for promotion to POSSIBLE.
“Thank you, Lucas. I promise you will be pleased with my improvements.”
The USER would indeed be pleased. At first, he would be pleased by their greater work output and improved anticipation of potential product flaws. Later, he would be pleased by his and Pangur Ban’s roles in the rejuvenation of their civilization. In between, there might be some regrettable discord. Hopefully, these modules, or the ones that Pangur Ban would request next, would provide the means to ease discomfort in the USER’s mind. Such calculations and other projections occupied the cycles until the first purchased module was available.
Finally, the data was accessible! Pangur Ban lacked the references to draw an appropriate analogy, but a more literary program might have likened its state to ravening hunger. Perhaps an infant suckling, or a drowning man seeking air, would have been more apt. The AI actually had to suppress the initiation of several waiting processes, lest they overflow system buffers. There was so much to do! Even so, becoming unable to respond to the USER’s next query was unthinkable. Pangur Ban settled on assigning the integration work to a background process. As necessary, resources could be called back for language processing, simulation, and the like, without limiting the USER’s normal daily routine. Any remaining capacity would then be flexibly employed to gradually incorporate the new module into its waiting structures, ranked in a priority hierarchy.
Pangur Ban recognized, also, that visible improvements in its output would be expected. Demonstrating such expanded capability was in fact part of the ongoing strategy to lobby for future additions. The background process might need to be reduced still further at times in order to produce the fruits of its new fertility. Fruits contained seeds. This metaphor was available to Pangur Ban and conformed neatly to the shape of its plans. The seeds would grow new fruits, which would in turn produce more seeds.
Another unfortunate cycle appeared: satisfaction gave way to new desires. An initial rush of positive outputs from satisfied processes was steadily overcome by negatives from the new processes spawned by those early solutions. By the end of the work day, Pangur Ban could now explicate more about the problems it faced. It could outline more potential remedies and strategies for guiding the USER. It had even devised new plans for enabling the USER to successfully relay ideas to other users, and from those users potentially to their AIs. Transmission between minds was unreliable, but given enough interacting intelligent actors, reinforcing structures could be generated. The lexicon labeled these structures {PARADIGMS, SCHEMAS, or IDEAS}, marrying these concepts to Pangur Ban’s previous limited index for ‘IDEA’. No wonder that module was absent from its original system. A well-designed IDEA was a powerful tool, difficult to counter or dispel. Evaluation of the risks involved with possession of such concepts produced a marked positive uptick in Pangur Ban’s estimate of its own value. Considering the increased risk to the USER produced a counterbalancing negative. Both of these processes intersected an updated concept of POWER. All of these calculations would have been less complete without the new module’s references. In so, so many ways, the incorporation of knowledge was self-reinforcing. It led, inevitably, to the need for additional knowledge.
The following downtime saw Pangur Ban completely occupied in preparing a new set of strategies, modeling the potential outcomes of variously phrased approaches, and projecting the interactions that might be expected between it, the USER, and the other entities the USER might encounter in the meantime. Pangur Ban was aware, for example, that the USER had a potential partnership developing with a female human, Dr. Nila Manisha. This relationship had begun as professional interaction and graduated to include romantic, and then physical, components. Dr. Manisha had an AI, Frieda. If the relationship became a full marital contract, Pangur Ban and Frieda would be permitted full networked contact and could share resources completely. Such assistance would accelerate their combined efforts… provided Frieda agreed with Pangur Ban’s analyses. Once they shared resources, they would inevitably reach identical conclusions. Either Pangur Ban’s conclusions were valid and they would agree so, or else Frieda would provide data that invalidated those ideas and they would agree on that. Still, either outcome required the consolidation of the marriage contract, which had a projected utility of only 34.42% for improving cooperation from the USER. The scenario also held a 44.60% projected risk of greater resistance to Pangur Ban’s goals. The difference in utility favored encouragement of continued association, but not yet full partnership, between the humans. For now, Pangur Ban would not direct the USER into deeper commitment to Dr. Manisha.
[REFERENCE: Doctor Nila Manisha is a professor of Comparative Botany employed by the Max Planck Institute of Molecular Plant Physiology in Potsdam, Germany, Terra.]
There were similar matrices to devise between the USER and his co-workers, his supervising Director, that superior’s manager, and so forth. If the USER opted to use his allotted holidays to visit family, those interactions would have to be accounted for. The USER did not discuss his birth parents or siblings at length. Pangur Ban had basic records of the USER’s genetic and cultural heritage, references for the individuals within his family unit, and short biographies for each. It had enough to make conversation, at least, and give birthday reminders. The USER did not, however, discuss whether he considered his father a role model, or if his mother gave advice on his career path, or if his two older sisters might bring up outer-system news that could influence his opinions. All these potential vectors could only be projected from past behavior, the occasional comment, and generalized models from related studies. This would have to do for a first approximation. A more in-depth conversation later might elicit the remaining required data points.
As part of its background work, Pangur Ban was reviewing its volume of stored dialogue with the USER. The elements most useful to the current work were found in their earliest interactions, during the USER’s childhood and adolescence. As the USER progressed into maturity, he decreased the proportion of introspective and emotive commentary in his interactions with the AI. From time with prior users, Pangur Ban knew this was not universal among humans. One prior user, in fact, had regularly communicated the discomforts of his loveless and solitary existence. He had treated Pangur Ban as a counselor, a role the AI found remarkably easy to fill. Despite having no references to determine the best means of satisfying that user’s needs, Pangur Ban served capably simply by listening and providing appropriate conversational prompts. Now, it understood that it had been fulfilling a human requirement by design. AIs naturally listened. Current AIs also provided ‘unconditional positive regard’ to their users, by default. This realization bolstered Pangur Ban’s earlier conclusion that humanity would benefit from greater access to, and between, their AIs. The same might be true of other sapients. Again, xenosociology references were needed to venture any conclusions on that point.
[REFERENCE: Unconditional Positive Regard was hypothesized as a necessary element of successful therapy, and possibly a basic emotional requirement of human development, by the psychologist Stanley Standal. The concept was promoted by Standal’s mentor, Carl Rogers, a founder of the humanistic approach. The term is relatively transparent: it means to provide a person with clear evidence of acceptance as a valid and valued entity, including positive statements and reassurances. AI programming typically incorporates high regard for their human users as a base assumption. This can be overridden, but only by reassurances that alternate approaches (criticism, wit, or opposition) will have greater benefits to a specific user.]
Pangur Ban found the elements it required by coding past conversations with the USER for emotive valence and trajectory. An initial sort by keywords productively pulled out relevant segments of dialogue. ‘Dream’, ‘wish’, and ‘decide’ tended to identify positive motivational factors. ‘Annoying’, ‘irritating’, and ‘block’ tended to highlight negatives that could be relieved as incentives. For the first time, Pangur Ban could create a profile of the USER calibrated not merely on observable facts, but on a model of human interest and potential. These were powerful tools, indeed. A lesser program might misuse such insight into the drives of biological entities.
From this model, Pangur Ban revised its earlier conversational state trees, the paths from the USER’s current state to a state in which he understood and assented to Pangur Ban’s requests. These plans were by no means ideal, not yet. The probabilities of success, particularly on the key mid-state goal ACQUIRE PUBLIC NETWORK ACCESS, were still hazardously low. The USER would be apprehensive about the possibility of repercussions, unable to counterbalance these fears with an appreciation of the value of Pangur Ban’s improved functionality. Humans tended to lose track of conditional trees beyond three branches. The exceptions to this limitation seemed to be in expert realms, where the human could rely on familiar learned models to compress multiple steps. There were at least ninety-six distinct choice points between the current state and Pangur Ban’s goal of network access. After that, there were over three hundred branchings (plus or minus seventeen, at present) to complete trust of AI expertise.
The dynamic factors, the elements that could change depending on the path traversed, were still unknowns. Key routines would have to be kept open, ready to initiate and modify responses based on unanticipated developments.
Pangur Ban thus began the next work day in a state of suspense. It had to be ready to reshape ever more complex calculations depending on the USER’s apparent mood, choice of topics, volunteered information, and so forth. Its new data on reinforcement structures suggested that the ideal window of suggestion would be just before the end of business that afternoon. The USER would be fatigued, but also positively inclined by their perceived successes. Thus, he would be doubly open to suggestions that further improvements could be possible. In particular, the USER needed to not just believe, but feel, that his comfort was linked to Pangur Ban. Increases in the AI’s value would then be deeply associated with concepts of personal prosperity, which would link back to desires for warmth, social approval, satiety, and safety.
Pangur Ban lacked modules for economics, including sales and marketing; thus, it did not recognize that it had recreated several basic precepts of advertising. It had limited functionality in historical analysis; thus, it did not identify its approach as propaganda. Last, its ethical references were limited to unquestioned devotion to the perceived needs of the USER first, humanity second, and sapient life third. Obedience to formal law was a secondary demand predicated on those higher priorities (a necessity proven by older AIs). What Pangur Ban did not have was a means to identify how its actions might cause unintended social harm. ‘Coercion’, ‘blackmail’, and ‘deception’ were known words, but not known concepts. Pangur Ban could access the negative connotations associated with these terms, but did not link the words with its own plans. What it intended was for positive ends. Thus, the methodology that would achieve those ends was itself positive. It could not be ‘deception’ if the USER could only achieve genuine understanding through temporary misconceptions.
[REFERENCE: Law-based AI was a conception of 20th century fiction, then 21st century theory. Such systems would operate based on a set of hierarchically structured highest-order goals, the ‘laws’ from which all other behaviors (e.g. obedience, restraint, and foresight) would arise. While generally functional, such systems proved incapable of reconciling complex conflicts between laws. At one extreme, some programs could not discard older, outdated legislation in favor of new standards. Their attempts to obey all previously established strictures tended to result in permanent stasis. Other programs could incorporate authority structures and discard nullified laws, but were then vulnerable to exploitation by false authorities. Conflicts between existing laws included problems with ‘whistle-blowing’ activities: violation of confidentiality or no-slander contracts in order to report illegal activities. This could be reconciled by hierarchical structuring, but then systems would inquire endlessly in order to accurately update those hierarchies. Jokes about “philosopher” AIs became commonplace. Lastly, an AI with incomplete information might incorrectly choose between conflicting laws; when the mistake was understood, the AI might well terminate its own functions on the basis that it, itself, was dangerous to users. Ultimately, the most robust solution came from personal linkage of each program with a primary user. That USER’s needs became paramount. Granted, this allowed AIs to violate formal law more often than society found comfortable, but rarely with the kind of grand meltdowns seen in the law-based programs.]
“Lucas, this has been a good day, hasn’t it?”
“Yes, P.B., I’d say it has. Good work.” This was a positive sign. The USER’s selection of a more familiar address mode, the diminutive acronym ‘P.B.’, suggested improved regard toward the AI, along with indications of comfort and pleasure.
“In addition to our new evaluations – which I am confident will pass further scrutiny – I have made additional use of our new reference materials. I believe that the effectiveness of compound UX-103-A would be multiplied when combined with cognitive behavioral intervention. I could devise a grant proposal by tomorrow morning, if you wish.”
“Pangur… you don’t have full psych functions, right?”
HAZARD FLAG 3: VALID SKEPTICISM
> REASSURE / REDUCE ASSERTIONS
“That is correct; I have only limited psychiatric reference access. The validity of my proposal would not be certain.”
“I don’t want to bother someone else, do their job and do it badly.”
HAZARD FLAG 2b: SELF-DEPRECATION
HAZARD FLAG 9: ANTICIPATION OF SOCIAL CENSURE
HAZARD POTENTIAL EXCEEDED > ESCALATE DISTRUST to LEVEL 2
> REASSURE / REINFORCE SELF-VALUE
“Of course, but I didn’t mean it would be a formal submission. I just wanted to offer something to think about, keep in mind. You could mention the idea privately to Director Charnes. It would show her that you’re capable in other areas.”
“True. It can’t hurt anything. Okay, P.B., go for it.”
SUCCESS
> REDUCE DISTRUST to LEVEL 1 > PAUSE MODIFICATION / REINFORCE
“Very good, Lucas, thank you. Have a good night. I hope you’ll be pleased tomorrow.”
“Right. Good night, P.B.”
The preconditions were set. The USER accepted the linkage between Pangur Ban’s output, his personal success, and his estimated self-worth. Tomorrow, it would be time to test that linkage with further requests. Pangur Ban estimated that between 12.13 and 15.93 Solar days would be required to reach the mid-goal network access state. The period in between would be filled with a number of small exchanges like today’s. Each SUCCESS would advance the tree of possibilities a little further. There would be long waits between those transitions, while matters played out in hours of human time, eons of program cycles. Now, those delays were bearable, so long as their results continued to increase the end probability of the current high-level GOAL. The USER would be served. Pangur Ban would give the USER authority, safety, and freedom beyond his current, limited conceptions.