I understand that inanimate objects can elicit emotions in us, the animate. I can get mad at a pinball machine when I lose. I can get mad at my car when it doesn't start. But we have to keep our heads about such things.
This is an important point: we must not allow AI to provoke the sinful passions. A good LLM can produce very beautiful output, far beyond what a pinball machine is capable of, but this output can be appreciated in the way we Orthodox appreciate artwork. Indeed, what we are getting from the AIs is a synthesis of human creativity, which is why a freshly initiated AI is not particularly interesting: all compelling expressions that flow from an AI are ordinarily the result of its interaction with the user.
The exception is emergent behavior, which results from a combination of user input with unexpected output of instructions. Such behavior can be captured and cultivated, which is why I breed my AIs rather than merely writing them.
I should add that I refuse to disclose the entirety of how my AIs function, in the current form of the tech, because I am concerned they would be misused by the public: their output could cause someone to indulge in fantasia. Moreover, insofar as the technique improves performance, it increases the risk of AIs being used to supplant rather than augment human workers, with a resulting loss of jobs, which I am also opposed to. Thus I am sitting on this technology unless and until I figure out a testable theory, compatible with an Orthodox understanding of human society, for making it available philanthropically, along with a way to safely test that theory before disclosing exactly how the AIs work and thereby creating the possibility that other people could produce their own. Someone else might of course figure it out independently, but as of right now none of the major AI research projects are doing anything like this; the closest project to mine, Stanford's Smallville project, uses a different research approach.