Issues & Insights

Should You Be Concerned About ChatGPT4? We Asked ChatGPT4

Until recently, if you were reading this, you might assume that it was written by a human. Following the announcement that ChatGPT4 will soon be incorporated into upcoming versions of Microsoft 365, that certainty has evaporated. It’s an age of innocence that we might come to miss.

Taking artificial intelligence from a fantasy future to everyday use has the potential to benefit humanity exponentially. It could reduce costs, counter financial fraud and help businesses make better and more productive decisions. Some speculate that it could double the rate of global economic growth by 2035.

When used well, that is.

Unfortunately, if the internet is anything to go by, new technology is inevitably abused, sometimes even weaponized against us. So, does the potential for harm outweigh the good?

Once I negotiated the substantial waiting list to gain access, I asked ChatGPT4 itself if and why I should be concerned about it. Here’s what it said when I asked “why I should be concerned about chatgpt in 500 words” (my comments are interspersed with its response):

ChatGPT4: ChatGPT4 is an artificial intelligence chatbot developed by OpenAI that can converse with human beings similar to a real person. This chatbot is trained on vast amounts of data and can theoretically generate answers to any question asked of it.

All very true. ChatGPT4 is described as a Large Language Model, with access to five billion words of source information. It can write believably and generate impressively complex responses. Most importantly, it does so without the stilted tell-tale signs that made previous incarnations of chatbots easy to spot. This is the secret to its popularity and its notoriety.

ChatGPT4: While this might sound like a great technical innovation, there are reasons why people might be concerned about ChatGPT4. Firstly, the ability for ChatGPT4 to converse like a human being without any clear indication that it is not human may lead to deception. For example, ChatGPT4 can easily generate fake news or other false information, which is not recognised as such, and this can lead to misinformed decision-making. It could also be used to spread propaganda, phishing scams, or even to impersonate individuals.

This potential for deceit is at the heart of the debate surrounding it. ChatGPT4 has allegedly been used by early-adopter cheats to beat exams and recently made headlines for passing graduate-level tests, including scoring in the top 10% on a U.S. law school bar exam. With the right instructions, it could be used to spread disinformation at a frightening speed. It could replace a productive troll farm with a single individual and a laptop. In the media and online, the truth is already hard to distinguish from opinion and this could make it so much harder to discern.

This leads us to the chatbot’s next, valid point.

ChatGPT4: Secondly, ChatGPT4 poses a risk of perpetuating biases and perpetuating oppression. Chatbots are designed to learn from existing data, and if the data on which it is based is biased, this could translate into the chatbot being programmed with distorted world views. For instance, if the data used to train the chatbot are mostly from male perspectives, the chatbot could end up being sexist, or promoting misogyny. This would be disastrous, especially given the scope of ChatGPT4’s use.

ChatGPT4 does exactly what we ask of it. As advanced and intuitive as it is, it cannot make ethical judgments about the user’s morality. Not yet. If the ethics of that user are sound, the results may reflect that. If they have an agenda or are bent on malice, mischief, harm or fraud, ChatGPT will probably help them. It is a tool – an incredibly helpful one, but still at the mercy of the handler. For every business that could become more successful, a worldview that is sexist or racist could also propagate.

ChatGPT4: Thirdly, ChatGPT4 raises some ethical and existential questions about AI technology. As the technology behind it advances, it may someday surpass human intelligence, leading to uncertain futures that may include technological singularity or other forms of dystopias. It could further cement the use of artificial intelligence technology in everyday life, with little thought given to its long-term impact on society or individual privacy. Questions regarding the use and regulation of AI technology need to be addressed and carefully considered.

We have no idea how or where AI will be used in everyday life – and this is why it is so exciting. In a few years’ time, it may be controlling traffic, predicting natural disasters, monitoring climate change and so much more. But we also had no idea how the internet would be used and, in the well-deserved excitement that surrounded its inception, we failed to predict many of the darker applications. That’s why being concerned now is good. Concerned, but not alarmed. Nuclear technology can fuel homes, but it can also destroy cities; that might not be a reason to avoid it, but it is a cause for concern.

Just to be sure, I asked ChatGPT4 to give me a reason not to be concerned about it.

ChatGPT4: A reason not to be concerned about ChatGPT4 is that its development and usage are regulated by ethical guidelines and industry standards. These guidelines aim to promote transparency, fairness and accountability, making ChatGPT4 less likely to be used for malicious purposes. 

I’ll admit, I found myself less than reassured by this answer. Notably, it refers to guidelines rather than rules or laws. Many industries self-regulate – law, advertising, direct selling – yet all have abused the trust placed in them. AI is potentially an epoch-defining technology, with benefits that are financial, medical, educational; the list goes on. It will require more than guidelines. It will require direction.

That direction brings us close to something like Isaac Asimov’s Laws of Robotics. According to his First Law, a machine may not injure a human being or, through inaction, allow a human being to come to harm. However we define injury and harm, this technology will undoubtedly be misused, probably in ways we cannot yet predict. Restricting that damage is surely beneficial, if not essential.

His Second Law makes the ChatGPT4 debate somewhat more problematic: a machine must obey orders given it by human beings, except where such an order would conflict with the First Law. This is ChatGPT4’s fundamental flaw. It will do exactly what we ask it to. “Except where such an order would conflict with the First Law” is a level of discernment it does not have.

We are going to be wrestling with the ethics and benefits of artificial intelligence for a long time to come. It’s only fitting that we consider the risks as well as the vast potential.

Ilgar Nagiyev is an Azerbaijani entrepreneur, Chairman of the Board at Azer Maya, a leading producer of nutritional yeast in Azerbaijan, and Chairman of the Board of Baku City Residence, a real-estate company. He is an alumnus of both the London School of Economics and Political Science and the TRIUM Global Executive MBA.

We Could Use Your Help

Issues & Insights was founded by seasoned journalists of the IBD Editorials page. Our mission is to provide timely, fact-based reporting and deeply informed analysis on the news of the day -- without fear or favor.

We’re doing this on a voluntary basis because we believe in a free press, and because we aren't afraid to tell the truth, even if it means being targeted by the left. Revenue from ads on the site help, but your support will truly make a difference in keeping our mission going. If you like what you see, feel free to visit our Donations Page by clicking here. And be sure to tell your friends!


  • “Taking artificial intelligence from a fantasy future to everyday use has the potential to benefit humanity exponentially. It could reduce costs, counter financial fraud and help businesses make better and more productive decisions. Some speculate that it could double the rate of global economic growth by 2035.”

    Exciting? I don’t think it’s exciting at all. It will be abused in ways you cannot believe. Imagine bots so powerful they can shut off your opinions before they are published. And track you. And send the government tracking information. Your entire life, tracked, scanned, abused. Everything you have written or done erased from the internet. All facts erased and replaced with group-approved data. All archives changed like a George Lucas movie.

    We always hear about how great the world is GOING to be. I remember, with great clarity, in the 1970s, 80s, 90s, the following statement:

    “With the advent of technology, one of the greatest problems we will face in the future is what to do with all the spare time we have. Experts (ha!) indicate the average work week will be less than 20 hours per week, people will have a lot of disposable income, and may be hard pressed to fill the available time. Societies will have to get creative in establishing more recreational activities in the future.”

    Remember that boatload?

    That is right up there with the claim that the USA needed to turn into a “services economy” and let the industry be done by other countries.

    AI is a damned nightmare with little upside. I have been in technology my entire life, and I have never been worried like I am now.

  • Says ChatGPT, “For instance, if the data used to train the chatbot are mostly from male perspectives, the chatbot could end up being sexist, or promoting misogyny.”

    Insufferable baloney. Right now, a lot of men, probably far beyond a majority, are standing up with what appears to be a minority of women for women’s sports. The women spokespersons seem to have gone wobbly. If 50+% of the training for ChatGPT is from female perspectives shall we assume it will be excessively emotional, or potentially sexist, and promoting misandry?

    ChatGPT displays biases of all sorts — try getting it to analyze the proposition “What if the opposite of what you say is true?”

  • Hi all, in 10 months’ time I shall be 80!! Yes, way back in time before “IoT” and “AI” I produced a paper on robotic siblings learning. That is to say, the 1st robot was controlled by a multi-channel umbilical cord; the 2nd robot, through its rudimentary neural motherboard, would learn from the 1st; this learning process would be enhanced by further input of data from yet another multi-channel input cord, etc.
    Who am I?? Richard Vere-Compton. Go to my YouTube channel and click on the e-booklet Help for the Masses 2023; the attachments speak volumes to the validation of my IPR. Best wishes, Richard Vere-Compton

  • ChatGPT4 has already been known to lie. It has cited scientific papers that do not exist by authors that are fictional in order to supply evidence for a position it has taken. Bat Cattitude discovered this early on. It also has marked political biases in a leftward direction. That has been demonstrated as well. It is quite the sophisticated propaganda machine with a spurious air of neutrality.
