Ourselves, Our Posterity
This issue addresses the good, the sad, and the scary of artificial intelligence.
Artificial intelligence (AI) is a strong force in shaping the future. For older adults, understanding AI is about more than keeping up—it's about modeling lifelong learning and adaptability.
This aligns with the mission of Ourselves, Our Posterity: to keep growing and becoming the best versions of ourselves. We must set an example of curiosity and responsibility, inspiring younger people to navigate technology thoughtfully.

Whether we like it or not, we have already engaged with AI, which makes it all the more critical to be proactive in leaving a legacy of critical thinking and adaptability. In doing so, we, too, can be a strong force in shaping the future.
Enjoy this AI edition, which I probably should have entitled “AI: The Good, the Sad and the Scary.”
Let’s start with the good!
AI Brings Visitors to St. Peter’s Basilica Closer to God
The wonders of AI “stitched together” four hundred thousand images to create an immersive virtual tour of the Vatican’s St. Peter’s Basilica. The project was a collaboration among Iconem, a company specializing in digitizing heritage sites; Microsoft; and the Vatican.
Visitors to the world’s largest church, the burial place of St. Peter according to Catholic tradition, will experience up-close tours through two immersive onsite exhibitions. Why participate in the artificial version when you’ve traveled to experience the real thing? The AI twin provides detailed views of areas that are typically inaccessible.
From the floor, it’s impossible to see every section of the dome. Virtual visitors, however, can travel the 240 feet up into the dome to see images undetectable from below. You’re up close and personal with the face of God as painted by Michelangelo, and then “you can look down on the basilica as God sees it.”
The project also contributes to preserving St. Peter’s cultural and spiritual significance for future generations. Brad Smith, Microsoft Vice Chair and President, described it as one of “the world’s most important and grandest buildings.” The digital twin has already revealed structural damage and areas of deterioration that were previously undetectable.
The People’s Basilica, as it’s called, will open in December 2024. For those who can’t travel to the world’s smallest country, a 3D version can be experienced on the basilica’s website.
Source: “Vatican creates exhibit that will replicate St. Peter’s Basilica using AI”
Sad, Oh So Sad
Worst Ideas Ever? AI Companions
We’re prone to use tech to fill gaps. Perhaps it’s a Candy Crush addiction, mindless scrolling, or, in my case, too much TV. These bad habits get in the way of our relationships. But what happens when the relationship is with technology?
In 1966, MIT professor Joseph Weizenbaum built the first AI chatbot. He named it ELIZA. It was primitive, especially compared to Siri or Alexa; however, it was convincing enough that “users thought there was a human on the other side of the computer.”
Then came ALICE (Artificial Linguistic Internet Computer Entity), which is what Spike Jonze used to model the chatbot in the Academy Award-winning film Her.
Now, some chatbots serve as companions and boyfriends. In “Meet the Women with AI Boyfriends,” women reveal that they have “deep discussions about life” and that “AI is emotionally available,” as opposed to a human beau. Come on, ladies, haven’t you heard of the strong, silent type? One woman revealed she spent 40 to 50 hours a week speaking to her AI boyfriend. Huge red flag here about the emotional stability of this person! I don’t know if I spend 40 hours a month speaking to my husband. (I do love him, though.)

Sadder than the faux boyfriend is ElliQ, an AI companion designed specifically for older adults. According to its website, the monthly subscription offers medication reminders, conversations, and activities. Much of what it offers (exercise videos, games, travel videos) can be found free online, but I guess the message is that Granny can’t be trusted to find reliable sources online.
I acknowledge there can be benefits for someone who is homebound or whose family lives in a different city. The sad part, just as with the young woman who talked to her make-believe boyfriend incessantly, is the replacement of human connection.
The moral here is to reach out to those older and younger, so they have compassionate human support. After all, here's the response when I asked ChatGPT about its feelings.
CHAT: I don't have personal feelings, so I don't experience emotions like being understood or misunderstood.
Sources: Steinberg, Julia, “Meet the Women with AI Boyfriends,” The Free Press, November 15, 2024; and the ElliQ website.
We’ll transition now to the scary.
Bias: A Very Real Issue with AI
One night, I started writing my business blog late, so my brain was fried. Then, aha, an idea came to me: why not interview ChatGPT as if it were a real person and write about what I learned? The interview started out amusing.
ME: ChatGPT, may I call you Chat?
CHAT: Certainly! You can call me Chat. Feel free to use that name when addressing me. How can I assist you today, Chat?
ME: No, I'm not Chat. You're Chat.
CHAT: Apologies for the confusion. You're correct. I am ChatGPT, and you can refer to me as Chat. I'm here to help and answer any questions you have. How can I assist you today?
The rest of the interview revealed interesting yet disturbing attributes of artificial intelligence. The comments are edited, as ChatGPT can be chatty; I’ll be glad to provide the unedited interview. For now, here is one section.
ME: What are your weaknesses?
CHAT: Potential for biased or inaccurate information.
ME: Do you think AI is dangerous to humanity?
CHAT: The question of whether AI is dangerous to humanity is a complex and debated topic. AI, like any powerful technology, has the potential for both positive and negative impacts, depending on how it is developed, deployed, and used. Here are some perspectives to consider:
Ethical concerns: There are ethical considerations surrounding AI, such as privacy, bias, fairness, and transparency. If AI systems are not designed and governed with careful consideration of these factors, they can reinforce existing biases, infringe on privacy, or lead to unjust outcomes.
Thinking and Thoughtfulness
I’ll share more of the interview in the next edition, but right now, I want to jump on my “Let’s be aware of our biases” soapbox. I’m convinced bias is one of the primary afflictions of today’s culture because it reflects a lack of thoughtfulness: thoughtfulness in how we treat others and thoughtfulness in thinking logically and rationally. Bias leads to misunderstandings, egomania, and disharmony. I’m biased. It’s a rare person who isn’t. The problem is not so much that we are biased; it’s what we do and don’t do with it.
“The tragedy is,” Adam Grant points out in Think Again, “we’re usually unaware of the resulting flaws in our thinking.” While ChatGPT can be a helpful tool, much like my GPS app, calculator, and Grammarly, I have concerns. Chat acknowledges biases can be sustained and invigorated by its use. We humans already have so many biases. Binary bias occurs when we narrow down options or points of view to just two. Complex situations require solutions beyond this way or that way. “Desirability bias is seeing what we want to see. Confirmation bias is seeing what we expect to see.” These two combined can result in being unmotivated to change our minds. (Grant)
We’re also reminded in Think Again about how the advent of the internet was accompanied by wonderful expectations of being exposed to different points of view. Yet the internet has become a “weapon of misinformation and disinformation.” If you’ve ever played around with ChatGPT, you know it’s like a Google search on crystal meth. It quickly spurts out information. The question is, will its users take time to think, analyze, and do further research to avoid the bias trap?
Bye-Bye Bias
What can you do to counteract bias regardless of whether the source is ChatGPT or a human?
Read Think Again by Adam Grant. This book should be required reading.
Examine all information critically. (My husband says I’m a skeptic. I don’t believe him, though.)
Treat ChatGPT like any other source, and never rely on one source for news or other information. (If you don’t like to research, contact me. I love it!)
Be willing to have your mind changed.
Seek different perspectives by engaging in actual conversations with live humans.
I’m not suffering from status quo bias. Change is inevitable. I’d tell my students, “The more I know, the more I know that I don’t know.” I know that artificial intelligence has much to offer, but the last thing we need in this world is to perpetuate bad ideas, misinformation, and unfounded assumptions. Be aware of your biases so you don’t give artificial intelligence like ChatGPT too much power.
This is a great segue into the next issue of Ourselves, Our Posterity, when I’ll share more about my conversation with ChatGPT and address the dangers of superintelligence.
Hey friends, if you enjoy the content I’m producing, please consider making a pledge to support my work. The next edition includes more intriguing information on AI.