In February this year, Google announced it was launching a “new AI system for scientists”. It described the system as a collaboration tool designed to help scientists develop “novel hypotheses and research plans”.
It is too early to tell how useful this particular tool will be for scientists. But what is clear is that artificial intelligence (AI) is increasingly transforming science.
Last year, for example, computer scientists won the Nobel Prize in Chemistry for developing an AI model to predict the shape of every protein known to humankind. The chair of the Nobel committee, Heiner Linke, described the AI system as the fulfilment of a “50-year-old dream”, solving a notorious problem that had eluded scientists since the 1970s.
But while AI is allowing scientists to make technological breakthroughs that would otherwise be decades away or entirely out of reach, there is also a darker side to the use of AI in science: scientific misconduct is on the rise.
AI makes it easier to fabricate research
Academic papers can be retracted if their data or findings are found to be no longer valid. This can happen because of data fabrication, plagiarism or human error.
Paper retractions are increasing rapidly, passing 10,000 in 2023. These retracted papers were cited more than 35,000 times.
One study found that 8% of Dutch scientists admitted to serious research fraud, double the rate previously reported. Retractions of biomedical papers have quadrupled in the past 20 years, the majority due to misconduct.
AI has the potential to make this problem even worse.
For example, the availability and increasing capability of generative AI programs such as ChatGPT make it easy to fabricate research.
This was clearly demonstrated by two researchers who used AI to generate 288 complete fake academic finance papers predicting stock returns.
While this was an experiment to show what is possible, it is not hard to imagine how the technology could be used to generate fictitious clinical trial data, modify gene-editing experimental data to hide negative results, or for other malicious purposes.
Fake references and fabricated data
There are already many reported cases of AI-generated papers passing peer review and reaching publication, only to be retracted later on the grounds of undisclosed use of AI, with some containing serious flaws such as fake references and deliberately fabricated data.
Some researchers are also using AI to review the work of their peers. Peer review of scientific papers is one of the fundamentals of scientific integrity. But it is also incredibly time consuming, with some scientists dedicating hundreds of hours a year of unpaid labour. A Stanford-led study found that up to 17% of peer reviews for top AI conferences were written at least in part by AI.
In the extreme case, AI could end up writing research papers which are then reviewed by another AI.
This risk is worsening an already problematic trend: an exponential increase in scientific publishing while the average amount of genuinely new and interesting material in each paper is declining.
AI can also lead to the unintentional fabrication of scientific results.
A well-known problem with generative AI systems is that they make up an answer rather than saying they do not know. This is known as “hallucination”.
We do not know the extent to which AI hallucinations end up as errors in scientific papers. But a recent study of computer programming found that 52% of AI-generated answers to coding questions contained errors, and human oversight failed to correct them 39% of the time.
Maximizing the benefits, minimizing the risks
Despite these worrying developments, we should not get carried away and discourage the use of AI by scientists.
AI offers significant benefits to science. Researchers have used specialized AI models to solve scientific problems for many years. And generative AI models such as ChatGPT offer the promise of general-purpose AI scientific assistants that can carry out a range of tasks in collaboration with scientists.
These AI models can be powerful lab assistants. For example, CSIRO researchers are already developing AI lab robots that scientists can speak with and instruct like a human assistant, automating repetitive tasks.
A disruptive new technology will always have both benefits and drawbacks. The challenge for the science community is to put appropriate policies and guardrails in place to ensure we maximize the benefits and minimize the risks.
AI’s potential to change the world of science, and to help make the world a better place, is already proven. We now have a choice.
Do we embrace AI by advocating for, and developing, an AI code of conduct that enforces the ethical and responsible use of AI in science? Or do we take a backseat and let a relatively small number of rogue actors discredit our fields and make us miss the opportunity?