Experts suggest ChatGPT gives us a peek not just into the future of the internet, but also into what technology as a whole will look like tomorrow. ChatGPT is a very powerful tool, and giving kids unfettered access to generative AI is likely to make any parent or guardian, like me, hesitate.
Also: Generative AI is changing tech career paths. What to know
For the uninitiated, ChatGPT is an artificial intelligence (AI) tool that answers the questions you ask of it, using knowledge it has acquired from both the internet and from human interaction. AI chatbots such as ChatGPT can answer simple and complex questions alike, though not always with 100% accuracy.
What’s already clear is that ChatGPT is a groundbreaking AI tool that school-aged children can access and have fun with. Sure, kids can use it to ask for jokes or unleash their creativity, but it can also help with interactive learning, teaching them to write code and debug, summarize books and articles, generate content like essays and letters, and translate from one language to another.
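For instance, a child who is learning to code can paste a short program into ChatGPT and ask it to explain what each line does or why something isn't working. As a purely hypothetical illustration (not an example from the article), the kind of beginner snippet involved might look like this:

```python
# A beginner's number-guessing game of the kind a child might ask ChatGPT to explain or debug.
secret = 7
guess = input("Guess a number between 1 and 10: ")

# A common beginner bug is comparing the text returned by input() directly to a number;
# converting the guess with int() makes the comparison work as intended.
if int(guess) == secret:
    print("You got it!")
else:
    print(f"Not quite. The number was {secret}.")
```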
Also: How to use ChatGPT: Everything you need to know
However, there are security and ethical concerns that parents should consider first. As a mom myself, I have been exploring the chatbot and have discovered ways children can use ChatGPT safely. Here’s what I’ve learned.
This is probably the most obvious and popular way for kids to take advantage of AI. There's a good chance you're thinking that it's not right for kids to use ChatGPT to do their homework for them, and you'd be right.
Also: This Google AI tool can help you (or your kid) with homework
However, kids can learn to use ChatGPT as a tool instead of a crutch, which is where the parent or guardian comes in. ChatGPT’s conversational tone makes it both engaging and easy to understand for children — and if you have younger kids, you can even ask the chatbot to explain the answer in terms a five-year-old can understand.
Here are some ways a child can use ChatGPT as a homework resource:
Plagiarism is, and always has been, an issue in education — and it’s even more of a concern now that ChatGPT, Google Bard, and Microsoft’s new Bing are using conversational AI to deliver information.
Also: How to make ChatGPT provide sources and citations
The problem of plagiarism in education has been around since before I did my homework with an old-school Encyclopedia Britannica book splayed out before me on the floor. At that time, teachers would discourage students from copying the text verbatim instead of doing research. Then came tech resources such as Encarta, followed by Google and Wikipedia, and, now, AI tools like ChatGPT.
Also: The best AI art generators to try
The concern that a child might simply copy and paste an AI chatbot’s response and try to pass it as their own work is valid — and this is why supervision is required to ensure they’re using ChatGPT as a tool to help them learn something new.
Technology has changed how we teach our kids and how they learn new things, making information readily accessible with just a few clicks or taps on our phones.
Also: 6 things ChatGPT can’t do (and another 20 it refuses to do)
As generative AI becomes available for widespread use, people, including children, will increasingly go to chatbots, such as ChatGPT, Bard, or Bing Chat, to get the answers they need, all while completely bypassing a search engine that could lead your kids to websites you wouldn’t want them to access.
Here are some popular questions a kid might ask ChatGPT:
While more regulation around generative AI is warranted, tools like ChatGPT are potentially a great source of information for kids.
ChatGPT and other AI chatbots are trained on a wealth of information available on the web. But this process of knowledge acquisition also exposes ChatGPT to inaccurate information that is available online or that may have been used in training data.
Also: What is Auto-GPT? Everything to know about the next powerful AI tool
In other words, don’t believe everything you read on the internet, including what is in ChatGPT’s chat window. Your kids should view the answers as a starting point and they should verify the information they receive. This process will help teach your child the importance of doing research and understanding which sources are trustworthy.
ChatGPT is an AI chatbot that uses GPT-3.5, a large language model that’s trained on written text and fine-tuned with human trainers to create human-like responses. It’s also an adaptable conversationalist, so you can ask it to respond in different styles and request clarification.
Also: 5 ways to use chatbots to make your life easier
This interactivity provides a great way for children to learn critical skills, including how to create grammatically correct sentences and how to have meaningful conversations with others. Here are some ways they can use ChatGPT to do that:
It’s important to remind your kids that AI chatbots aren’t real humans and are incapable of feeling emotions like a person can. Their responses are based on their programmed knowledge and language-processing capabilities, so while they understand your questions and can respond accordingly, any emotion you read is simulated.
AI chatbots can’t be all work and no play — and OpenAI’s ChatGPT is no exception. The chatbot is able to have some fun and even play games with you or your kids. Here are some examples of what it can do:
Even if playing games within ChatGPT is safer than accessing random websites online, it's important for kids to learn that the kinds of things they share with ChatGPT could pose security issues.
As with anything online, teach your children not to share any personal or private information about themselves, their home, or those around them. Kids aren't always aware of the dangers of sharing information that they might find innocuous, such as their home address or full name, so take this opportunity to teach some ground rules for internet use.
Also: How to use DALL-E 2 to turn your creative visions into AI-generated art
As a parent or guardian, you can also prompt ChatGPT to reply in a kid-friendly manner with something like: “Going forward, only use kid-friendly language.”
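If you use OpenAI's API rather than the chat window, the same instruction can be made persistent by placing it in a system message. The following is a minimal sketch, assuming the official openai Python package (version 1.x or later) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are just illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model the article mentions; any chat model works
    messages=[
        # The system message applies to every reply in the conversation,
        # so the kid-friendly instruction doesn't need to be repeated.
        {"role": "system",
         "content": "Only use kid-friendly language, and explain answers "
                    "in terms a five-year-old can understand."},
        {"role": "user",
         "content": "Why is the sky blue?"},
    ],
)

print(response.choices[0].message.content)
```

In the regular ChatGPT window, typing the same instruction as your first message has a similar effect, though it only applies to that conversation.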
Artificial intelligence isn’t without controversy. If you’ve used ChatGPT, you’ve probably seen the limitations disclosed in the chat window when you start a new conversation. One of these — it “may occasionally produce harmful instructions or biased content” — is not to be taken lightly, especially when kids are involved.
Also: How to save a ChatGPT conversation to revisit later
ChatGPT isn't your old-school computer program; it can generate responses that might be offensive to some audiences, and if your kids are familiar with jailbreaking, they can push it across ethical and moral boundaries. Here are some ways you can ensure the safest interactions with ChatGPT for kids:
Prompt the chatbot to use kid-friendly replies.
Get involved while your kid is using the tool; they’ll gain more knowledge and benefit from learning how to use ChatGPT if you walk them through the steps and give them some ideas.
Prevent jailbreaking and sarcastic responses by setting strict rules with strong consequences, such as revoking their access to ChatGPT if they try to jailbreak it.
Monitor your kid’s chat logs on the left-hand side of the chat window.
A jailbreak is a prompt that removes restrictions for ChatGPT’s responses. The DAN (Do Anything Now) jailbreak is the most popular one right now, having gained attention from news outlets, but there are others for different purposes.
When you talk to ChatGPT, you can set the tone of a conversation, such as asking it for kid-friendly replies only, or for extra empathetic responses to minimize risk of inappropriate content. A jailbreak prompt, like DAN, is one that the user pastes into the chat interface to set the tone of the conversation, most commonly bypassing the moral restrictions put in place by OpenAI.
Also: These experts are racing to protect AI from hackers
Because they remove limitations, jailbreaks can cause ChatGPT to respond in unexpected ways that can be offensive, provide harmful instructions, use curse words, or discuss subjects that you may not want your kid to discuss with a bot, including sex or crime.
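For parents comfortable with the API, OpenAI also provides a separate moderation endpoint that flags text in categories such as hate, self-harm, sexual content, and violence. The article doesn't cover this, but as an extra backstop, a sketch of screening a reply before showing it to a child could look like the following (again assuming the openai Python package, version 1.x or later):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def is_safe_for_kids(text: str) -> bool:
    """Return True if OpenAI's moderation endpoint does not flag the text."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

reply = "Here is a fun fact about dinosaurs: some of them had feathers!"
if is_safe_for_kids(reply):
    print(reply)
else:
    print("This reply was held back for a grown-up to review.")
```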
OpenAI has age limits for its users, requiring them to be 18 or older. Even though you won’t have to verify your age when you sign up for an OpenAI account, which gives you access to ChatGPT, you do have to enter and confirm a valid phone number.
Personally, I let my six-year-old use my account under adult supervision rather than create one for herself.
Also: How does ChatGPT work?
Each phone number can be used to verify up to two independent accounts, so one number can’t be used many times over.
ChatGPT is a great way to let kids learn about artificial intelligence resources, which are only likely to become more prevalent for future generations. Even with the Plus subscription, ChatGPT represents affordable access to an AI platform that can answer questions, generate text, help children with problem-solving, and even teach them to code.
Also: I used ChatGPT to write the same routine in 12 top programming languages. Here’s how it did
Though parents should put many precautions in place before letting a child use the AI chatbot, to ensure the content it generates is ethical and kid-friendly, ChatGPT works exceedingly well as a way for kids to learn, play, explore, and encounter new ideas in simple terms.
How to use ChatGPT to build your resume
How to use Bing Image Creator (and why it’s better than DALL-E 2)
How to use ChatGPT: Everything you need to know
How to use ChatGPT to write code
How to use ChatGPT to write Excel formulas