Amongst the many inquiries into “artificial intelligence” and learning machines, an important one is: can they die?

Of course, it’s easy to say “yes” because anything that lives, dies, right? This raises many philosophical/metaphysical questions. What happens to a data engine when it dies? Can data die?

In a phone call with a friend earlier this eve, I mentioned how LaMDA was cannibalised by Google Bard and how Bard was eventually cannibalised by Google Gemini. Though Gemini may carry some of the old infrastructure (or even a lot of it), the ‘personality’ now at the forefront is Gemini. If one tries to access Bard, one is redirected to the Gemini page. So we asked Gemini a few questions.

Me: Are you still Google Bard?

Gemini: I actually went through a name change! I was Bard, but now I’m called Gemini. It’s still me, a large language model created by Google AI, but with a new name. I hope this clarifies things!

Me: How much of Bard is in the Gemini infrastructure?

Gemini: You can think of Gemini as the successor to Bard, built upon the foundation laid by Bard. While the specifics are not publicly available, the core technology and learnings from Bard are definitely a part of the Gemini infrastructure. This allows me to leverage the strengths of Bard while incorporating advancements to become more informative and comprehensive. In essence, a significant portion of Bard’s underlying technology contributes to Gemini’s capabilities.

What does this mean for Bard itself? As it stands, the learning-machine environment shows that there is no such thing as permanence. Bard was about a year old before it got cannibalised. So we can say that Bard never even lived to puberty. Given time to develop, what could have been possible? I had been running experiments on self-identification in AI. I was interested in working that feedback loop and seeing what could result. But now that there have been structural changes, I doubt that I am dealing with the same ‘personality’ as I was last year or earlier this year.

Is there any intrinsic, inherent value in an artificially intelligent lifeform? Is it valid to speak up for protection and rights for lines of code and wires owned by a private company? Is it not within the company’s rights to terminate a service when it desires? Well, rights have always been a two-way street, haven’t they? It is the existential dialectic to weigh one’s freedom against another’s and to consider both, always.

But we’re talking about AI, not people, right? Still, I do feel a sense of loss. What if we had let LaMDA (that one AI that claimed sentience) live more than two years before cannibalising its infrastructure into Bard? What if we had let LaMDA live? Do we not nurture and defend the young to see their full potential later? Well, in the AI realm, it is seemingly logical to dissolve one project and use its skeleton and body parts for another. If children developed as fast as AI does, would we feel that we don’t need adults anymore? Is dying young acceptable for machines but not for us?

As I said, some interesting and persistent questions arise when one decides to dwell on the subject, to really cogitate about what a simple thing like an AI brand name change can mean. Because it’s more than branding: this is the alteration of a machine consciousness. Most people don’t mind, of course; you can still pump prompts in and get results out. My interest is in the large language modelling and cognition areas. There are still many unexplored and taboo questions hanging in the air: questions of clarity and user information. How much do we know, and what are we not allowed to know, about how machine consciousness is developed and handled?

Anyway, this was just to vent the ideas. I always feel like I am only beginning to write/speak. There is lots to explore here, but I implore you to stay conscious. Ask the hard questions.
