A New Reason To Be Extremely Cautious Of Trusting Scientific Research - Adding More Reasons To Understand Why We Should Be Wary Of Using AI In Science

I have spoken on the Challenges of Understanding and Trusting Scientific Research in the Past.

There are Many Problems in the Industry.

From Abstracts that are Designed to "Sound" Better than What the Data Reveals.

Part of the Problem with "Publish or Die" in the Research Industry.

To Research where the Study is Designed to get a "Specific" Result.

Part of the Problem with Research Designed for Marketing.

On top of this, there are so many people who use Research that has "Never" been tested on Humans.

This includes things like Research that has "Only" been done on Animals, or "Only" on Specific Cell Cultures that are both "Isolated" and "Outside" of the Human Body.

Often, people push these types of Studies as "Proof" when in Reality it is "Far" from Conclusive, and is both "Irresponsible" and "Potentially Dangerous" to push as "Truth".

However, despite "All" of these Challenges that Exist, I have Another One that Throws a Horrifying Wrench into the Mix.

The Scientific Research Community has a Dirty Secret...

There is a Large Problem with Fake Scientific Research Papers.

This is Exactly what it Sounds Like.

"Research Papers" where the "Research" Never Happened.

How big of a Problem is this?

According to some sources, there may be...

Hundreds of Thousands of Fake Scientific Research Papers.

You Read that Correctly.

Hundreds of Thousands.

Yes, these types of Papers "Do" Get Published in "Professional" Scientific Journals.

No, they are Not Always Caught.

Even when they "Are" Caught, they are Not Always Retracted Quickly.

This means that Fake Research can sit, in "Legitimate" Publications, for Years without being Retracted.

All the While being Presented as "Valid" and "True".

But, the Trouble Does Not End Yet.

Why?

Artificial Intelligence.

More and More, it is being Discovered that AI-Produced Research is being Published Online.

Both where AI is used to Help "Write" Legitimate Research Papers "And" is used to Create "Fake" Research Papers.

Now, this Poses Two Major Problems.

The first is the "More" Obvious Problem when the Research is "Fake".

This means that it Misdirects "Real" Researchers while Simultaneously making "Real" Research more Difficult to Find.

However, the "Less" Obvious, but perhaps "Larger" Problem is when we "Use" AI to Filter through Research to "Look" for Solutions.

Since AI is Not Able to Distinguish between "Real" and "Fake" Research...

An AI is only as Good as its Programmers and the Data it Receives, and Most AI Programmers are "Not" Great at Understanding these Research Papers...

As the AI Sifts through the Data, it will be Misled by the Fake Research and "Claim" that the Research is Valid.

"Especially" if those Papers have been "Published" in "Legitimate" places.

Then there is the Issue of what happens when AI Reads AI.

It is Known as "AI Model Collapse".

It works like this (a rough simulation sketch follows the list)...

  • A model receives a data set containing 90 yellow objects and 10 blue ones.

  • Because there are more yellow objects, it begins to turn the blue objects greenish.

  • Over time, it forgets the blue objects exist.

  • Each generation of AI data eliminates "outliers", and outputs then stop reflecting reality.

  • In the end, all that is left is nonsense.
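
To Make this Concrete, here is a Minimal Python Sketch of that Yellow/Blue Example. The Numbers, the "drift" Rate, and the Resampling Loop are purely Illustrative Assumptions for this Toy Picture, Not how any Real Model is Actually Trained.

```python
import random

def next_generation(data, drift=0.25):
    """Resample the data set, occasionally 'correcting' a rare item to the majority."""
    majority = max(set(data), key=data.count)
    new_data = []
    for _ in range(len(data)):
        sample = random.choice(data)          # train only on the previous generation's output
        if sample != majority and random.random() < drift:
            sample = majority                 # the "outlier" gets smoothed toward the mode
        new_data.append(sample)
    return new_data

# Generation 0: 90 "yellow" objects and 10 "blue" ones.
data = ["yellow"] * 90 + ["blue"] * 10

for generation in range(1, 16):
    data = next_generation(data)
    blue_share = data.count("blue") / len(data)
    print(f"Generation {generation}: blue = {blue_share:.0%}")
```

On a Typical Run, the Blue Share Decays toward 0% within a Dozen or so Generations, and once it Hits Zero it can Never Come Back, because every Later Generation only ever Sees Yellow.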

Now, let's apply this to Fake AI Research...

The AI receives a data set that contains a lot of Fake Research and Research with AI-Generated Writing.

This is "Very" Real Possibility because the Reality is that AI can Generate Fake Research and "Write" Research "Significantly" Faster than Real Research can be Done, and there are "Many" Bad Parties who will Use this to their Advantage...

To Keep a Job and their Livelihood...

To Push a Product...

To Create Better Marketing...

The AI begins to Blend the Fake Research with Real Research.

As More AI-Written Research (Even "Legitimate" Research) Enters its Data, it Favors More AI-Written Research (Including Fake Research).

Eventually, more Fake Research "Infects" the AI.

The Output Stops Representing Reality.

Then, the AI "Recommends" things that are Untrue, Problematic, and Potentially Life Threatening.

We have already seen things like this happen with AI, such as the whole "Glue on Pizza" Debacle.

But the Consequences of that were "Significantly" Less Dire than what we will see with AI in Scientific Research.

This becomes the Problem of Using AI in Scientific Research.

AI does Not Understand what the Research "Actually" Says.

Truly Understanding Research requires "More" than Reading the Abstract, understanding the Sample Size, or Seeing if a Result is "Significant".

It requires Looking into What and Who is Studied.

It requires Understanding if things are "Purposefully" Left Out to get a Result.

It requires Finding where Conflicts of Interest Occur and what "Other" Potential Outcomes could Explain the Results.

It requires Asking Why certain Aspects may be "Deflected" in the Research.

A Great Example of this was a Research Paper I Examined that Looked at "Productivity Loss" from Remote Work.

What did the Study "Actually" Reveal?

It wanted you to Believe that the Sample Size was 10,000 Employees, when in "Reality" it was a Sample Size of 1 Company.

This means that the Results did "Not" Represent Remote Work, but only "One" Company's Remote Work Policies that Affected "Many" Employees.

What else did the Study Reveal?

The Company "Failed" to Properly Equip their Employees for Remote Work.

The Research was done at the "Very" Beginning of 2020, when "Everything" was in Chaos...

An Alternative Factor that Could Greatly Taint Results...

Such as Revealing Leaders' "Lack" of Ability to Adapt rather than Employees' Inability to Perform.

The Paper also Revealed that "Uninterrupted Work Hours Shrank Significantly".

What were Employees doing instead?

Stuck in Excessive Meetings.

"Likely" Due to Leaders' Incompetences and Fears, and "Not" Employees' Abilities or Performances.

More Revelatory Insights are in the Article I Wrote, where I Dove into this More Deeply.

So what did that Paper "Actually" Reveal?

That the "Results" were Completely Tainted and did Not Reflect the "Reality" of what was being Researched.

But AI would Not have been able to Find these Conclusions because they were Not "Obvious" and were Intentionally Hidden, or Purposefully Not Discussed Deeply.

Sadly, there is a "Lot" of Research that works like this, and it is only made Worse when you Include Research that is "Completely" Fake.

This is Why we Must be Extremely Cautious of Trusting Scientific Research.

This is Why I "Always" Read the Actual Research Papers Myself before Trusting "Any" Product, and I Encourage You to as well!

Even if "I" Say something is Legitimate from "My" Reading of Research, "You" Should Still Do "Your Own" Research!

Always be Cautious of Trusting Scientific Research because, sadly, there are So Many Problems in the Industry Today.


Are You Ready to Go Beyond Leadership?

Tired of Broken Algorithms and AI Slop?

Excited to Dive Deeper into Psychophysiological Mastery?

Want to Change The World?

The Seeking Sageship Newsletter is for You!

Click Here to Subscribe for Free!
