The Key to Preventing ChatGPT’s Destruction

How to Stop ChatGPT from Going Off the Rails

By Ankita Bissyer | December 21, 2022

My first thought when WIRED asked me to cover this week’s newsletter was to ask ChatGPT, OpenAI’s ubiquitous chatbot, for some ideas. That is how I’ve approached my inbox, recipe box, and LinkedIn feed over the past few days: my efficiency has plummeted, but my output of witty limericks about Elon Musk has increased a thousandfold.

The results of the bot’s attempt to produce an essay about itself in the vein of Steven Levy were not encouraging. ChatGPT fell short of capturing Steven’s voice or offering anything novel; as I said in my previous post, it was eloquent but not fully convincing.

The question of whether or not I could have gotten away with it did cross my mind. What kinds of monitoring systems are in place to identify when students or employees are inappropriately exploiting AI in their work?

To find out, I spoke with Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute who writes and speaks about the importance of algorithmic transparency and accountability. What would something like ChatGPT look like with those principles built in? I asked her.

Amit Katwala: Whether ChatGPT can aid academic dishonesty has been a hot topic this week. Is there any way you could tell whether one of your students had plagiarised from it?

Sandra Wachter: From here on out, it’s going to be a game of cat and mouse. The technology isn’t quite good enough yet to fool me as a law professor, but it might be convincing to someone who isn’t an expert in the field. I’m curious whether it will advance to the point where it can fool me, too. Just as we have tools to spot deepfakes and detect digitally altered photographs, we may eventually need technological means to verify the authenticity of the content we’re viewing.

That seems harder to do with text than with deepfaked images, because there are fewer artefacts and telltale signs in language. The company that created the content may end up being the only one capable of building a reliable detector for it.

So you really need the buy-in of whoever is making the tool. But if my clientele consists mainly of students, I may not be the kind of business that agrees to do that. And even when watermarks are applied, they can potentially be edited out.

I think the most tech-savvy communities will figure something out. There is, however, a tool [developed with OpenAI’s help] that can tell you whether a piece of text was machine-generated.
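(The interview doesn’t name the tool, but one publicly released example in this family is the RoBERTa-based GPT-2 output detector that OpenAI helped produce. The sketch below is a guess at how you might query that Hugging Face checkpoint; the model name and output labels are assumptions, and because it was trained on GPT-2 text, its verdicts on ChatGPT output are a rough signal at best.)

```python
# Hedged sketch: querying an off-the-shelf AI-text detector via Hugging Face
# transformers. The checkpoint name is assumed (the RoBERTa-based GPT-2 output
# detector); treat its scores as a hint, not proof.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "The results of the bot's attempt to write an essay were not encouraging."
print(detector(sample))  # e.g. [{'label': 'Real' or 'Fake', 'score': 0.97}]
```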


If ChatGPT had been developed with harm reduction in mind, how would it differ from the current version?

Several things. To start, I think it’s crucial that whoever makes these tools also builds in watermarks. The EU’s planned AI Act might help, too, since it addresses bot transparency by requiring that users always be able to tell when they are interacting with an AI system rather than a human.


But businesses may be unwilling to do that, and watermarks can potentially be removed, so it’s also important to fund research into standalone tools for evaluating AI output.
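To make the watermark idea concrete: one scheme discussed in the research literature (the “green list” approach of Kirchenbauer et al., not anything OpenAI has confirmed using) has the generator quietly favour a pseudo-random subset of tokens, so a detector that knows the rule can run a simple statistical test. A toy sketch, with whitespace splitting standing in for a real model tokenizer:

```python
# Toy sketch of a "green-list" statistical text watermark (Kirchenbauer et al.
# style). The generator leaves a hidden statistical bias; a detector that knows
# the rule tests for it. Not any vendor's actual scheme.
import hashlib
import math

GAMMA = 0.5  # fraction of tokens treated as "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA


def watermark_z_score(text: str) -> float:
    """z-score of the green-token count vs. what unwatermarked text would show."""
    tokens = text.split()  # a real system would use the model's own tokenizer
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected, stddev = GAMMA * n, math.sqrt(n * GAMMA * (1 - GAMMA))
    return (hits - expected) / stddev


# A watermarking generator would nudge its sampling toward green tokens, so its
# output scores several standard deviations above 0; human text stays near 0.
print(watermark_z_score("Ordinary human-written text should score close to zero."))
```

As Wachter notes, a scheme like this only helps if the model maker cooperates, and heavy editing or paraphrasing can wash the signal out.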

And in education, we need new approaches to assessing students’ work and setting research papers: what kinds of questions can we ask whose answers are harder to fabricate? Limiting the disruption will take a mix of technological solutions and human oversight.


You’ve spent a lot of time studying counterfactuals, which are used to work out how an AI system arrived at a particular conclusion by exploring what it would have done given different inputs. I was surprised to find that ChatGPT makes this kind of probing considerably easier than other models.

It’s empowering for people to be able to interact with it and figure out what it’s doing and how smart or dumb it is. When a system offers no transparency and no explanation for its actions, you feel far less in control.
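For readers new to the idea, a counterfactual explanation answers: what is the smallest change to the input that would have flipped the decision? The toy model, data, and “loan approval” framing below are invented purely for illustration and are not related to ChatGPT itself:

```python
# Minimal sketch of a counterfactual explanation: search for the smallest change
# to one input feature that flips a model's decision. Model and data are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" model: features are [income, debt].
X = np.array([[30, 40], [50, 10], [20, 35], [80, 20], [40, 5], [25, 30]], dtype=float)
y = np.array([0, 1, 0, 1, 1, 0])
model = LogisticRegression(max_iter=1000).fit(X, y)


def counterfactual(x, feature, step=1.0, max_steps=200):
    """Grow/shrink one feature until the predicted class flips; None if it never does."""
    original = model.predict([x])[0]
    for direction in (+1, -1):
        candidate = np.array(x, dtype=float)
        for _ in range(max_steps):
            candidate[feature] += direction * step
            if model.predict([candidate])[0] != original:
                return candidate
    return None


applicant = np.array([28.0, 32.0])           # currently denied by the toy model
print(counterfactual(applicant, feature=0))  # income that would flip the decision
```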

As someone who works to stop AI from doing harm, does it concern you that ChatGPT gained over a million users in a week? Or is it a good thing that more people are getting familiar with AI in a relatively safe setting?

You can’t call a technology either beneficial or harmful in itself, because it is both; what you do with it is the deciding factor. Looking at it from the outside, I find it incredibly fascinating that the potential exists; the things humans are capable of creating blow my mind.

On the other hand, it can be misused in harmful ways, such as lying, spreading false information, or deliberately hurting others. As far as I’m concerned, the technology itself is still neutral here.

Ankita Bissyer is a full-time content editor at Techllog, where she covers technology. In her free time, she loves to read and listen to music.
