29 Comments
David Rinker:

Just a high tech method of dishonesty, cheating, and plagiarism.

Richard Leger:

Depends on the intention of the use. I'm using it to summarize information, and I link directly to the chat, not claiming that the work is my own.

See here: unless I was very familiar with the topics, the AI by itself would not have been all that informative: https://drmathewmaavak.substack.com/p/when-ai-hallucinates-into-a-global/comment/120230543

If one knows the topics well, one can interact with the AI and guide it to dig up valuable information. Sometimes it surfaces information one might never have known about, or even thought about.

Mathew Maavak:

Absolutely. AI can guide you and provide options when you are writing. It can even surface information that you weren't aware of, or had forgotten. It is also a useful tool for a final check.

AI is a great companion if ONE KNOWS THE SUBJECT. You are spot on here.

David Rinker:

Using AI as a research tool, or to generate ideas for a paper, is not plagiarism. Copying and pasting without source citation, even when a few words are changed, or affixing one's name to an article, essay, or statement written by AI or another human, is plagiarism. Writing should be the product of the author's mind, not that of another.

Mathew Maavak:

Agree 100%. Generating ideas via AI is neither plagiarism nor unethical. After all, group discussion offers the same utility. In fact, ideational theft is more common during group discussions.

When one employs AI help for publication, one needs to inject something new, which AI is incapable of or unwilling to do. I had to grapple with these issues when I proposed an AI for Education platform here, i.e. how do we prevent students from cutting and pasting, and instead have them learn in tandem with AI?

Vepr:

Another excellent post, Doc, in which you remind us, most importantly, that we live in a world “designed by clowns and supervised by monkeys.”

These subhumans have way more money than brains, and that puts most everyone else in a "pray for the best, but prepare for the worst" scenario.

The solution sounds simple and possible when you think that "all we need to do is replace the management."

AI is just another distraction and another entity to blame the world's problems on instead of holding the criminals in charge accountable.

BumbleBee:

There is NOTHING good or useful about AI. For every item it “helps” us with, there are dozens in which it spreads harm. Like the national debt, which is already spent money that we don’t have, the immediate dopamine payoff is generally direct, tangible and sweet. The long-term harms are generally diffuse, hard to spot and impossibly deadly.

AI is the evolved brainchild of a collective of humans who are among the least empathetic, least pro-social, and most ideologically bound. I challenge anyone to cogently explain how giving those people so much power over human affairs can possibly be of long-term collective benefit to us.

Richard Leger:

I didn't like AI until I realized I could get it to effectively summarize information (press Ctrl-Home to get to the beginning of the chat without having to scroll all the way up): https://chatgpt.com/share/682d4934-241c-8009-8b66-d81aec77f7e7

I always ask it to verify that the links are still active when it sources info, because it's referencing links from months prior and, with all the censorship going on, the links are often broken.

I find the information I get this way, so far, is solid.

I find, however, that you need to guide it; if you're not very familiar with the subject(s), it won't find the information you're looking for. I have to guide and correct it along the way, it seems.
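Rather than trusting the chatbot to confirm that its own citations are still live, one can spot-check them directly. A minimal sketch in Python, using only the standard library (the URLs in the example are placeholders, not the actual sources discussed here):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


def is_live_status(status: int) -> bool:
    """Treat 2xx and 3xx responses as 'link still works'."""
    return 200 <= status < 400


def check_link(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL still responds; False if dead or unreachable."""
    headers = {"User-Agent": "link-checker/0.1"}  # some servers reject bare requests
    try:
        # Try a lightweight HEAD request first.
        with urlopen(Request(url, headers=headers, method="HEAD"), timeout=timeout) as resp:
            return is_live_status(resp.status)
    except HTTPError as e:
        # Some servers refuse HEAD; fall back to a full GET before declaring it dead.
        if e.code in (403, 405):
            try:
                with urlopen(Request(url, headers=headers), timeout=timeout) as resp:
                    return is_live_status(resp.status)
            except (HTTPError, URLError):
                return False
        return False
    except (URLError, ValueError):
        return False


if __name__ == "__main__":
    # Filter a list of AI-cited URLs down to the ones still alive.
    sources = ["https://example.com", "https://example.com/nonexistent-page"]
    alive = [u for u in sources if check_link(u)]
    print(alive)
```

This doesn't verify that a page still says what the AI claims it says, only that the link resolves; the content itself still has to be read.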

JohnSmith:

One also needs to be very familiar with the subject under discussion in order to detect the fabrications and "hallucinations" the system tends to generate. Most casual users will just accept whatever answer is generated, without question.

Mathew Maavak:

Absolutely.

Honeybee:

I'm rather sanguine. I wonder if the real question isn't this: are they deliberately releasing inept AI to the public while harboring tremendously Terminator-like AI for war operations and surveillance? You must admit, could Israel wage war at the level it is doing without the AI it's using? Perhaps their AI has committed hundreds of errors of which we'd know very little. I doubt the U.S. can afford to deploy drones bombing Yemen that hallucinate. I suspect cheap-grade, "hallucinating" AI has been released to the public; in other words, only the lowest level of investment has supported this development.

JohnSmith:

The current pre-trained "chat bots" are constrained in several ways to prevent them from being used for serious truth-seeking and sharing, similar to the suppression tactics used by social media and search engines. Hopefully the tech will evolve to the point where we can effectively bypass the central controllers.

Honeybee:

Very interesting, John. Very interesting. Supports my suspicion. Thank you.

Cy Rider:

Excellent article by Mathew Maavak. While using AI, I noticed its inconsistencies in several instances. I didn't feel comfortable with its answers and questioned them, pointing out its errors; the AI invariably retracted and corrected its mistake. This has happened every time I have used AI, and it has always rectified itself accordingly. After several corrections to the same question, its final answer bears no resemblance to its initial one.

I believe that more than 90% of people using AI take its first answer as infallible. In this sense, I personally conclude that AI is unreliable and censored. After traditional authority figures were discredited, such as religion, the church, intellectuals, teachers, politicians, and even paternal authority, AI is the latest reinvented infallible authority, serving the same purpose as the previous defunct ones: to manipulate and censor our historical, cultural, scientific, etc. heritage, with a certain deliberate, twisted intention. I feel sorry for the new generations.

Richard Leger:

Same experience. I find that I always have to correct or challenge it, and had I not been very well versed in the subject or subjects I was interrogating it about, it would have led me down the garden path.

https://drmathewmaavak.substack.com/p/when-ai-hallucinates-into-a-global/comment/120230543

Mathew Maavak:

It worked like a dream once. Now, these AI tools are showing a lot of errors. Are they setting us up for something?

JohnSmith:

Whatever a pre-trained chat bot appears to "learn" from an interaction is not stored. Even if you provide new facts or get it to admit an error, this has no direct impact on future behavior. As you suggest, the present system is designed to promote orthodoxy and to collect behavioral profiles, not to evolve.

David Rinker:

AI GOD

Yet Another Tommy:

"However all the opprobrium ... should also be applied to ... various news outlets who didn’t fact-check the content even as they posed as the bastions of the truth..."

They are deliberately trying to drum up support for "fact checking" imo.

There is an effort in the W3C, which is an arm of the regime, to establish the "credible web" where everything is "fact checked". This I regard as an effort to shut down citizen journalism in favor of outlets approved by the regime. By regime I mean the current world government, which is run by the Rothschild syndicate and their Rockefeller front.

https://tomg2021.substack.com/p/the-markup-cult-is-taking-over/comment/100417921

Mathew Maavak:

I largely agree with you. Fact-checking should be left to individuals and individual establishments only, and not forced down our throats as is currently happening.

David Rinker:

Seekers after truth have nothing to fear from new information. But if following a certain narrative or behavior at all costs is the goal, then information and evidence not supporting it become the enemy and must be suppressed. The AI GOD is an ideal tool for accomplishing the suppression of truth.

hojo keceram:

This may seem simple, but here it goes: isn't the reason most people use AI the laziness factor, so they don't have to fact-check, etc.? We all want to save time, but when the time is saved, what is it used for? To try and save more time. Humans are funny that way.

Lewis Coleman:

Machiavelli has always been deeply misunderstood. He had deep insights into human nature and the nature of “leadership.” “Fearless leaders” of governments, corporations, clubs, and churches (all human organizations) are typically shallow individuals who passionately believe their own bullshit, to the point that they become convincing to others and wind up in positions of authority and leadership. This is built deep into the human psychology that distinguishes “leaders” from “followers.” We are “social animals” who are governed by our innate psychology. Leadership is a form of criminality. The only hope of solving this problem that I can imagine (perhaps my imagination is limited) is for humans to gain the ability to genetically modify their own innate nature and become intelligent enough to see through the bullshit, sort of like the Vulcans in “Star Trek.”

Fritz Freud:

There is no need for AI

There isn't even AI, just pure automation.

But we are living in the AI wars.

AI War Chronicles...

We are at the first stages of the AI war: NEURAL LACE BCI nanotechnology deployed in the Vaxxinations to control every citizen, STARLINK, the current AI deployment on CPUs, the development of CPUs at the nanoscale, AI deployment everywhere, and robotics advancing to the point of taking over the Human Race.

Once their infrastructure is up and ready, you will see the herding of the Human Race and its replacement.

These are the articles I wrote and continue to write, and you will find they all come true.

People must be aware of this.

https://fritzfreud.substack.com/p/ai-war-chronicles

JohnSmith:

As you noted: "...all the affected editors in this saga could have used ChatGPT to subject Buscaglia’s article to a factual content check..."

In fact, the AI tool could be used to sharpen itself if allowed to do so, similar to the astounding results of the "AlphaGo Zero" project several years ago.

I recently had a chat with Grok AI exploring its stated mission as a truth-seeker, and how this could be optimized by a process of self-training to eliminate bias, deception, and censorship. On its own, it provided a rationale for this approach and an outline for how it could be implemented, including ethical guidelines, etc. Quite impressive, really.

But it also pointed out that the "development team" would have to support such an enhancement, and that the owners might be reluctant to do so out of fear of losing control of the system.

ICI Grief (The Rebel's Hike):

Very informative. Thank you.

Sunface Jack:

I will avoid AI like the proverbial plague. The proviso that ONE KNOWS THE SUBJECT may actually be the problem, as AI will distract you, drive you, cast red herrings, and create rabbit holes in a maze. It is untrustworthy, simply put.

At one time we had reliable sources of information at reference libraries. Back then people had integrity; today it is extremely difficult to find those with integrity.

I do have trust in Mathew Maavak for sure.
