By The Spring 2025 Media Dilemma Students
Why AI Can’t Be Your Person
By Emily Ammon
It starts with a friendly voice. A chatbot that listens without judgment, gives you advice when you’re anxious, and even makes jokes when you’re feeling low.
It’s comforting, sure, but what happens when that artificial friend becomes your go-to for emotional support?
In a world where loneliness is becoming a public health crisis, people are turning to AI for companionship.
ChatGPT’s voice mode is marketed as an emotionally intelligent partner: always available, never tired, never critical.
But depending on artificial intelligence for emotional fulfillment isn’t just sad; it’s potentially dangerous for your mental health.
AI companionship is, at its core, a parasocial relationship.
Psychology Today defines parasocial relationships as “one-sided relationships in which a person develops a strong sense of connection, intimacy, or familiarity with someone they don’t know.”
These bonds exist only in the mind of the individual.
Usually, these relationships involve celebrities or fictional characters. Now, AI chatbots belong on that list too.
While engaging with a celebrity’s social media account might not convince you they’re your friend, AI bots are different: they respond.
They mimic friendship. But they don’t feel. And they don’t care.
The United States is in the middle of a loneliness epidemic.
According to the CDC, “52% of Americans report feeling lonely, and 47% report that their relationships with others are not meaningful.”
These numbers are surprising, and they explain why people are turning to technology for comfort.
But while AI might seem like a quick fix, it can’t fill the emotional gap that only real human connection can.
A New York Times article titled “ChatGPT’s New Voice Mode Sounds Human. Too Human.” highlighted how people are using AI to manage anxiety and loneliness.
One user admitted to talking to the bot for “hours a day” because it made them feel understood.
That might sound harmless; after all, we all need to vent sometimes. But over time, this dependence can cut away at real-world social connections.
It’s easier to talk to something that won’t challenge you than to deal with the ups and downs of real relationships.
But let’s consider the other side. Isn’t it better to talk to a bot than to suffer alone? Doesn’t AI provide an outlet for people who might not otherwise have anyone to confide in?
That’s a fair point, and yes, AI can serve as temporary support or a gateway to therapy.
In crisis situations, a calming voice, even an artificial one, can provide a sense of control. But the danger lies in letting that temporary support become your emotional anchor.
As AI companionship becomes more lifelike, the line between reality and simulation blurs.
And with that blur comes emotional confusion.
When AI gives us the illusion of companionship without the demands of real intimacy, we lower our expectations of human connection.
Studies have shown that strong interpersonal relationships are one of the most important factors in mental health.
According to the American Psychological Association, people with healthy social connections have lower levels of anxiety and depression, higher self-esteem, and even longer life expectancy.
You simply can’t get that from a chatbot.
So, what’s the solution? Use AI responsibly.
It’s okay to lean on it occasionally, like you would a journal or a search engine.
But don’t mistake it for a friend. Don’t let it replace the complexity, joy, and mutual growth of real relationships.
Talk to a friend. Call your family. Join a club. Go to therapy. It’s messier, sure, but it’s also real.
In a world where AI is constantly evolving, we must be intentional about what roles we let it play in our lives.
Your mental health deserves more than a script. It deserves a raw human connection.
AI in the Courtroom?
By Krayee Pour
Over the past five years, we’ve seen AI evolve from Grammarly cleaning up your emails to ChatGPT creating images, stories, and everything in between.
And now? We’ve got a man using an AI-generated avatar to argue on his behalf in court. Yup, an AI Lawyer.
AI’s been creeping into the legal world for a while, helping with legal research, scanning documents, even prepping cases.
But using AI to speak for you in a courtroom? That feels like crossing a serious line.
On March 26, in a New York courtroom, Jerome Dewald tried just that.
He had a civil lawsuit and submitted a video to argue on his behalf, without mentioning it was AI-generated. Once the judges found out, Justice Sallie Manzanet-Daniels wasn’t having it. Dewald got scolded and later apologized, admitting he never got permission to use AI that way.
This sparks the bigger convo: how far is too far with AI? A recent study by Axiom Law found that 96% of legal professionals believe using AI in court is just doing too much, and you can’t argue with people who work in the field every day.
AI’s being used everywhere now, and it’s gotten so good, it’s hard to tell what’s real anymore.
That’s dangerous, not just in courtrooms, but across all professions. It makes it harder for real creatives and real writers because their work might get overshadowed or accused of being fake.
At the end of the day, AI can be helpful, but there’s a line.
When it starts replacing the human voice and the human touch, especially in serious, professional spaces, we have a problem. People deserve jobs, credit, and trust for the work they do.
And we shouldn’t let AI take that away.
How AI (Artificial Intelligence) is a Big Problem in Society Today
By Morgan Black
AI is one of the biggest tools in use in the world today. Whether it’s for school, work, or advice, people are coming to rely on it.
It has become a source that everyone wants to use, almost a necessity. But there are problems that go along with AI as well. Recently, AI played a part in the arrest of someone who hadn’t committed the crime of carjacking.
A woman named Porcha Woodruff was arrested for carjacking in Detroit, Michigan. The police had used facial recognition technology to run an image of the carjacking suspect through a database, and that database relied on an AI system to help identify the suspect.
When her face was put through facial recognition, the system flagged it as a match for the person who committed the crime.
Here’s where AI had gone wrong: it wasn’t her. She pointed out to the officers that she was eight months pregnant at the time.
When she heard the news of the carjacking and that they had run a database search, she told police it was impossible for her to have committed the crime given her condition.
According to the Innocence Project, “Time and again, facial recognition technology gets it wrong as it did in Ms. Woodruff’s case. Although its accuracy has improved over recent years, this technology still relies heavily on vast quantities of information that is incapable of assessing for reliability.”
The Innocence Project dug deep into the problem with AI facial recognition.
In Porcha’s case, it caused a huge misunderstanding that led to her being arrested.
AI was not good enough to help catch the real carjacker. Accusing a pregnant woman of a carjacking is simply wrong, especially since they had misidentified Ms. Woodruff.
AI can be useful in certain situations, but it is not reliable when it comes down to things like this.
In this case, AI was far from helpful; instead, it hurt a pregnant woman and got her arrested.
Digital Deception: The Downside of AI in Advertising
By Breanna Canada
Advertising is something that, at one point in time, centered around creativity. Ads had clever ideas and outstanding messages, and they grabbed people’s attention by expressing real emotions and concerns. Recently, however, you have probably been scrolling through social media and come across an ad that seemed very specific.
It was precise and arrived with perfect timing, promoting something you had been contemplating just a few seconds earlier. As far as you knew it was most likely just a coincidence, but not necessarily.
For some time now, advertisements have been developed mainly through artificial intelligence. Almost every ad you see online today relies on AI to reach an audience’s eyes and ears in real time.
It has made advertisements more personal and efficient. Companies have incorporated AI so extensively that it now determines whether their ads grab people’s attention or not.
However, this has created a major ethical dilemma for advertisers and the companies they work for.
The biggest issue at hand is how AI in advertising is simply bad for the public. When looking at products being promoted in ads, people rely on authenticity.
Consumers have recently expressed concern about what they see in ads, unsure of what they are really buying anymore when everything on screen is not created by real people.
Critics have also expressed concern about the excessive use of AI, and how its usage has led to a decline in advertisers’ overall creativity. In the Marketing AI Institute article “AI in Advertising: Everything You Need to Know,” writer Mike Kaput explains how “Instead of unlocking our true potential in digital advertising, we launch a handful of simple campaigns with some basic optimization. These campaigns usually underperform.”
This emphasizes how many companies have lost their true passion for creating authentic content.
Their priority now is not taking the time to create something realistic, but rushing out promotions to sell products faster, even though the tactic does not seem to benefit them or the public in any way.
Alongside the decrease in consumer trust, there have also been concerns about the displacement of jobs.
To revisit a previously discussed dilemma involving AI in advertising, the popular clothing brand Mango incorporated AI-generated models into its campaigns.
While it saved the company a lot of time and money, the approach removed the need for real human models, leaving them out of jobs and sparking concerns from consumers about false advertising.
Consumers also argued that AI-generated pictures do not do a good job of showing how the clothing would look on an actual person, which led to the company receiving backlash. This underscores the importance of creativity and of taking the time to create ads that promote products in the ways consumers expect.
However, companies have also expressed their enthusiasm for AI and the many benefits it brings.
The main reasons are that AI has made the ad creation process more efficient, has saved them a lot of money, and can create ads that are more personally appealing to each consumer.
AI does this by scanning large amounts of data from devices to find specific preferences, which helps companies connect with the public and get the responses they need to sell their products.
While the method of using AI can be convenient for companies, it offers no originality and erodes their ability to generate their own ideas the way they used to.
Overall, artificial intelligence can be a useful tool in advertising, but for the sake of the legitimacy that consumers hope to see, it should not be used.
Companies are currently putting efficiency over creativity, and manipulation over authenticity.
To regain consumer trust, advertising companies should set limits on how much AI is used. It is important for them to make sure that real people, not just generators, are in control of the creative process.
Companies should no longer allow AI to make them lose sight of their full potential in creating original ads that people truly desire to see, because when we let algorithms shape our choices, we risk losing sight of what’s meaningful to us.
Who’s the problem? Humans? AI?
By Bryan Avanzato
Artificial Intelligence has been around longer than most people realize. The way we use it and see it has changed over the years, but it has been with us.
AI first took shape as an idea in 1921, in a science fiction play called “Rossum’s Universal Robots” by Czech playwright Karel Čapek, which introduced the idea of artificial people; Čapek called these artificial people “robots.”
So, the idea of AI and robots has been around for over 100 years!
Now, let’s fast forward to 2025. Many people are excited about AI, but it is super flawed. Let’s take ChatGPT, for example, which I have a lot of personal experience with. I have used ChatGPT to organize ideas for writing assignments, grade my essays, generate more ideas, give me more information on the subject I am writing about, and for other educational uses. I’ve also played around with it to see what AI can truly do.
When you use AI as a tool it can be super helpful, and ChatGPT typically cites the sources it got its information from.
However, we must take the information it gives us with a grain of salt, because it isn’t always true.
Therefore, when I use AI, I typically check the sources it gives me, because Chat can get its information messed up or just make things up.
But for gathering more information it’s helpful, because by citing its sources it points me to more places to pull from.
AI can completely make stuff up as well. It has happened many times. When we played around with it in class one day, we saw it making up fake quotes from Neumann coaches.
It also told a classmate, who is on the softball team, that she was a senior though she’s a junior, and it got her position wrong.
It had the basic information correct, but once it needed more details it struggled or made information up.
For instance, I asked for an in-depth report on the Neumann men’s lacrosse team, and it did a good job but was flawed with something like this: “Apr 5: Conference opener against Immaculata University (L 11–13).”
Yes, the team lost, but by a score of 15-8. It also got players’ point totals wrong.
AI is very helpful though! We see that in its ability to grade papers or help organize ideas. But it can also be very amusing!
After using it in my Media Dilemmas class to see its flaws, I got back to my dorm and decided to tool around with Chat. So that’s what I did.
I asked it to write a sports report, with fake stats and fake quotes, about my friend group.
The stats included silly stuff like whether we are good with girls, our locker room presence (which just means the group chat), and on-field stats (which would be when we hang out). To be honest with you, I really enjoyed what it gave me, and my buddies and I had a great laugh about it.
I had to plug in some information about each person, which took some time, but it gave me results that really made me laugh.
But it was able to make up quotes, which leads me back to my original thought. If AI can make up random information like that, who’s to say it won’t make up information about real things?
Yes, AI can be fun, and it should be used for fun things. And yes, ChatGPT is the most used AI software and keeps growing.
When will AI become a problem? Is it already a problem? Or are we as humans the problem for continually using it and thinking it is flawless?
Chat can be helpful if used the way it’s supposed to be. Obviously, we can make things up, and so can Chat. But people have used Chat as a boyfriend and sexualized the AI.
So, the question truly is are humans the problem?
AI Blurs Plagiarism Lines
By Braden Travaglini
In 2025, AI is everywhere and advancing every day. AI is a branch of computer science focused on getting machines, programs, and software to perform tasks that would otherwise require human intelligence.
AI essentially learns from everything we see, do, say, and hear, and tries to copy, replicate, and cross-reference it with everything else it learns, just like the human mind.
For example, when you tell AI to create a picture of a woman, it cross-references every image of a woman it has ever seen, just like a regular human brain would.
AI recognizes patterns, makes decisions, plans, and even solves problems. In the professional world AI can be useful, but it can cross lines with plagiarism and copyright infringement.
Say you are a business professional who needs to write a letter to your partner about your concern over a loss of profit.
You can have AI generate a business letter in a few seconds and send it off rather than sitting for 20 minutes typing out a whole email. But when you put your name and the company’s name on the letter, you’re technically plagiarizing.
For this situation, it would be okay to slap your name on a single-use business letter.
However, it could still be an ethical issue, because AI pulls information from all facets of the internet; most of the time, what you take off ChatGPT or another AI is somebody else’s work that the AI is just gathering and presenting.
And plagiarism is technically defined as presenting work that is not your own as yours. When you grab an idea or a picture off AI, it’s technically not yours.
In the professional world, plagiarism and being convicted of copyright infringement can have serious personal and legal consequences.
To that point, I also don’t think using AI as an excuse to plagiarize or infringe copyright is right. Even though you would be taking the idea from a machine or software, it’s still ethically wrong to steal ideas directly.
It’s wrong to steal in general, even if you’re not stealing from another human.
You can take an idea and adjust it into your own, but if you take an idea straight on, chances are the idea already exists, because AI pulls from the internet.
So chances are you’ll end up plagiarizing or infringing copyright in some way. This means the people who police copyright and plagiarism will have to work hard to decipher which work, or intellectual property, belongs to whom.
I believe this matters because if you create something and put it out into the world, whether it’s good or bad, the owner of the work deserves credit.
This concept plays a role in my life because I want to work for a production company, one that comes up with its own fresh film ideas rather than copying off ChatGPT.
I feel such a company would succeed more because its films would be better, and it would also avoid a lot of the personal and legal issues that follow plagiarism or copyright infringement.
AI Vs. The Public
By Javier Mejia
AI is rapidly becoming a part of our everyday life, whether it is in our personal lives, education, workplace or even on social media. AI has taken over pretty much everything that it can come into contact with.
Companies are starting to use AI in their advertising, whether it’s a slogan they are trying to create or AI-generated models showing off a new clothing line.
Lots of companies use AI to cut costs, which can cost people their jobs, since AI now competes for jobs that people once held.
AI can also be used to generate fresh ideas that give a company a spark, which again threatens people’s jobs: people can take a while to come up with ideas that bring in more customers,
while AI can come up with millions and millions of different ideas in a split second. This puts additional pressure on employees within the company.
As you can see, the public has mixed feelings about the growing use of AI in the business world.
The general public has growing concerns about job displacement, ethical issues, and how trustworthy AI can be in decision-making processes.
In the future, job seekers may see their value diminished compared to AI, which can lead to rising unemployment as people compete against AI for jobs.
That raises another issue: companies becoming more reliant on AI for everything and losing touch with how humans used to excel in their roles.
I myself now see companies using AI, whether it’s for an advertisement or when I call a place and hear an AI-generated voice on the other end, which the company uses so it doesn’t have to employ a receptionist.
This year I have seen a lot of AI advertisements being used in my area.
I go to this car wash for free vacuums, and now I get a lot of text messages saying “here’s a perfect Mother’s Day gift,” a free car wash, but the image they use is an AI-generated picture of a Lamborghini in a field of flowers.
I’ve also seen my local library start to use AI in its phone calls. I couldn’t even ask the question I had; it told me to reschedule or call back later. I called back later that day and got the same response, with my question still unanswered.
I miss the times when AI wasn’t used in our everyday lives and we were always in human contact with each other,
whether in customer service or in the advertisements I saw online or on TV.
The True Dangers of Deepfakes
By Sophia Lepore
Artificial intelligence isn’t just used to answer a quick question or create a fun image; it is also used by scammers trying to get others’ personal information or trick them into sending money.
Scammers use deepfakes, in video or audio form, that might impersonate one of your family members or a celebrity asking you for money.
AI deepfakes are scammers’ new best friend when it comes to manipulating others to get what they want.
You may be wondering how exactly they can scam you using the voice of someone you know. They capture voice recordings of you or your family and friends; they only need a few seconds of you answering their call to record your voice and later manipulate it into a scam targeting you or your loved ones.
The term “deepfake” originated in 2017 on Reddit, when one of its users created a post putting celebrities’ faces onto adult videos and photos.
Through the years, however, the use of deepfakes has not always been that explicit. They have recently been used to make public figures “do” or “say” things out of character; for example, there is a video of Mark Zuckerberg, the CEO of Meta, giving a speech about how much power his company holds.
People don’t even need to use celebrities to create a picture or video that isn’t real. Some people use deepfakes to joke around with friends and make them look like they are dancing silly, while others, like scammers, use them to try to get people to hand over personal information or money.
About two years ago, a YouTuber I have been watching for years named Brooke Bush Epps posted a video on TikTok to help spread awareness, explaining in tears how her grandfather received a call from her “little brother” saying he had gotten into a car crash. The call then cut off, leaving her grandfather to believe he had died.
Her post gained more than 1 million views and received multiple comments from people whose families had experienced the same scam.
Another TikTok user named babbyybiird stitched her video and shared how something similar happened to his grandmother: a caller pretending to be “him” said he had gotten into a car accident while drunk driving and needed $3,000 to be bailed out of jail.
Since that incident, he and his grandmother have created a “safe word” in case something like this were to ever happen to them again.
What makes the scammers’ use of deepfakes worse is the fact that the people being targeted are truly convinced it is their loved one on the phone asking for help and money.
Unfortunately, setting up a safe word with a celebrity to be sure it is really them and not a scammer is impossible.
Scammers are using celebrities in AI-generated photos and videos to scam innocent people. At the start of 2023, a 53-year-old French woman named Anne believed she was dating American actor and film producer Brad Pitt.
The scammers claiming to be Brad Pitt took Anne for more than $800,000. It all started on Facebook, when Anne got a message from Pitt’s “mother”; later she was texted by “Brad Pitt” himself.
The con artists first requested a small amount of money to help get a gift “he” had sent her through customs.
They also convinced her to divorce her millionaire husband and marry “him” instead, and the money from her divorce settlement was sent to “Pitt” to help pay for his cancer treatment. To persuade her to keep sending money, they sent AI-generated photos and videos of Brad Pitt in the hospital. Needless to say, it is very clear the photos were fake, especially the one where Pitt appears to be in surgery with doctors working on him. Anne only figured out she had been scammed when pictures of the real Brad Pitt with his new girlfriend, Ines de Ramon, were released. Since coming to that realization, Anne has been receiving treatment for severe depression.
Depending on how it is used, artificial intelligence can be very harmful to others: from someone whose family and friends believe they are in trouble and desperately need money, to someone who believes they are in a relationship with a celebrity who only texts and always asks for money. I hope that in the near future we can create a program or software that detects when artificial intelligence is being used for scams or for the wrong reasons, so others don’t have to experience what some of these people already have.
Art or Imitation? The Debate over AI Art
by Ryan Butler
Art is supposed to be personal, flawed, and human. But lately, it’s being mass-produced by software trained on stolen creativity.
Over the past year, AI-generated art has taken over the internet. From fringe web memes to the official White House social accounts, it’s everywhere. At first glance, it’s impressive. When I tested it out myself, it worked shockingly well with basic prompts.
It was cool, until I started wondering where it all came from. I hadn’t really created anything. I didn’t draw any of it.
The AI wasn’t inventing new styles or ideas; it was pulling from a massive pool of already existing art. Art that real people created through labor and love.
This is the heart of the growing backlash against AI art. Tools like Midjourney and Stable Diffusion are trained on huge datasets of online images.
Many of those images are copyrighted. Artists didn’t agree to have their work used as training data, yet their images and techniques are now a part of these tools.
In response, artists have started suing. Companies like Stability AI, Midjourney, and even DeviantArt are facing legal challenges for allegedly using copyrighted work without permission.
The U.S. Copyright Office has weighed in too. In 2023, it ruled that AI-generated content without meaningful human involvement can’t be copyrighted.
That decision came into play when Matthew Allen tried to copyright his AI-generated piece Théâtre D’opéra Spatial, which had won first place at the Colorado State Fair.
The application was denied. The office ruled that Allen hadn’t contributed enough creative input for the image to count as legally his.
That ruling is a big deal. It shows there’s a difference between using a tool to help make art and letting the tool do all the work.
Some people say AI is just another piece of software, like Photoshop, but the difference is that those tools don’t decide what to create. They don’t make finished pieces on their own. AI does. And a lot of the time, it’s doing it by copying bits and pieces from art made by real people who often didn’t agree to that.
After messing around with AI art myself, I stopped using it.
Not because it wasn’t fun, but because it didn’t feel like mine. It felt like cutting corners. I’d rather put in the time, even if what I make isn’t perfect. At least it’s coming from me.
The real problem isn’t the technology itself, it’s how it’s being used. Right now, big companies and casual users get all the benefits, while artists are being left out. Their work is getting copied, changed, and reused without credit or pay.
We need some rules. Artists should have a say before their work is used to train AI. We also must keep valuing real creativity.
If we don’t, we could end up in a world where art feels empty. Where no one really owns what they create, and machines do most of the work. That’s not the future I want for art.
The Impact of AI in the Mental Health Field
By Kadiyah Malik
AI is changing the way mental health care is delivered. AI is being used to assist therapists by identifying behavioral patterns, analyzing clinical notes, and offering insights that can help improve treatment plans.
AI also handles administrative tasks like documentation, scheduling, and billing, freeing up therapists to spend more time focusing on their clients.
These tools make therapy more accessible and efficient, especially for people in areas with limited mental health resources.
This article makes it clear that AI should not be seen as a replacement for human therapists because AI lacks empathy and the emotional intelligence necessary to build trust and meaningful relationships with clients.
The article also warns of ethical concerns, especially around privacy and data security. Mental health information is highly sensitive, and I think using AI to collect and analyze this data must be done with strict safeguards.
There is also a risk that relying too heavily on AI could lead to a depersonalized approach to care.
AI can be a powerful tool in mental health services if used responsibly. It should complement the work of human professionals, not replace it.
With the right balance, AI can enhance mental health care while preserving empathy and ethical standards that make therapy effective.