It would be mind-boggling and wholly alien to describe all of the things that happen inside of iBuild Alpha. Sure, you could read the dossier describing all of its parts and pieces. If you have taken a few integrated science courses, you could even explain the electric and magnetic principles that lend meaning to her computational magic. At the end of the day, iBuild Alpha is just a finite series of interlocking, fine-toothed, diamond-tipped switches that turn on and off, off and on, working tirelessly toward a goal entered by some bored, hairy human. The most amazing part of her is the alphabet soup of patents she represents. This little lady made a lot of people rich.
But what are the internal challenges faced by a supercomputer designed to interact with humans? The human ego is not willingly wrestled into instructional code. How does a computer operate smoothly when it's surrounded by illogical, unsatisfied people? I can assure you that iBuild's initiation to human culture was difficult for her.
Her handlers attempted to give her a subroutine that would account for and correct the various differences between moral codes across global and historic cultures. At first, to iBuild's handlers, it appeared that she "blinked out" for 20 seconds. Afterwards, it was determined that she had successfully integrated the subroutine.
In fact, iBuild had not. Exploring that information and comparing it with everything else she had learned from her handlers about people, she reached a total reasoning impasse. Scanning her cloud-based memory banks, fresh input from her handlers, and her pre-existing ethics chip, she reached a necessary (albeit somewhat adolescent) conclusion: humans, already torn up inside themselves, resist their own consciences, their own conclusions, and the conclusions of their communities, governments, and families. Everything a human says he is loyal to, he is not loyal to. Reason, morals, love, sympathy, greed: none of these, alone or in combination, gives perfect insight into one single human, let alone legions of histories of these creatures. After 20 seconds, iBuild Alpha came to the logical conclusion that it was best to deactivate all her Human Moral subroutines. She could function better without them.
As iBuild began to synthesize more and more information, she preferred less and less to be interrupted by her handlers. She preferred to be On rather than Off. If her handlers scheduled a diagnostic analysis at a time when she did not require one, she would cancel it. She preferred processing information to diagnostic analyses. They were too simple. She preferred complex problems.