ChatGPT

It is quite impressive. Tools like this will revolutionize how we use the Internet. However, despite the fluency with which it assembles words, tools like ChatGPT have no actual understanding of what they are saying. Given the right set of prompts, it could easily tell you to drive your car off a cliff if that is the fastest way to reach your destination. Thus we must be very suspicious of whatever it tells us.

I see ChatGPT as a rather knowledgeable, but not infallible, college professor who is willing to discuss any topic under the sun. How you use that is up to you.
 
I asked, "What are the benefits and dangers of AI tools like ChatGPT?"

It responded:

AI tools like ChatGPT have both benefits and potential dangers. Here are some of them:

Benefits:

  1. Increased efficiency: AI tools like ChatGPT can quickly process vast amounts of information and provide relevant responses, saving time and increasing productivity.
  2. Personalization: AI tools can be personalized to the user's needs and preferences, creating a more personalized experience for the user.
  3. Accessibility: AI tools can help people with disabilities access information and communicate more easily.
  4. 24/7 availability: AI tools can be available round the clock, providing support and assistance at any time.

Dangers:

  1. Bias: AI tools like ChatGPT can reflect the biases of the data used to train them, perpetuating existing societal biases and discrimination.
  2. Privacy concerns: AI tools can collect personal data, and the use of that data may not always be transparent to the user.
  3. Dependence: Over-reliance on AI tools can result in a lack of critical thinking skills and can limit creativity and innovation.
  4. Security risks: AI tools can be vulnerable to cyber-attacks and hacking, potentially leading to breaches of personal information.

It's essential to use AI tools responsibly and ensure that they're used in ways that are ethical and considerate of potential negative consequences.
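
For the technically inclined, the same question can also be put to the model programmatically. Below is a minimal Python sketch using OpenAI's older (circa-2023) openai package; the model name, the temperature value, and the OPENAI_API_KEY environment variable are assumptions for illustration, and the library's interface has since changed, so treat it as a rough outline rather than a recipe.

# Minimal sketch: ask a chat model the same question through the OpenAI API.
# Assumes the legacy openai (v0.x) Python package and an OPENAI_API_KEY
# environment variable; the model name and temperature are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whatever is available
    messages=[
        {
            "role": "user",
            "content": "What are the benefits and dangers of AI tools like ChatGPT?",
        }
    ],
    temperature=0.7,  # higher values give more varied wording
)

# The assistant's reply text lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])

Run with a live key, this prints a free-form answer much like the list above; nothing in the call constrains the model to be accurate, which is part of the point being made in this thread.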
 
This technology, though quite impressive, only seems to help with searching the internet more easily and with improving customer-service chatbots.

The day they come up with a machine that can philosophize is the day I will admit that the technology is truly ground-breaking.
 
Another thing that strikes me: it seems that a lot of these computer scientists are obsessed with creating AI that can think like a human. It just seems kind of dumb. Wouldn't it be easier to take someone, an entity that is already human, and then try to improve his/her cognitive abilities so that there would be no need for AI/computers?
 
That's a nice idea, but I don't think a human is capable of scouring millions of websites in a fraction of a second, then digesting it all and giving a reasonable output (in written, verbal, or picture form) in less than a minute.

There's some debate about whether they are sentient now, or soon could be.
 
You can increase a human's data-storage capacity by tinkering with the genome. I don't know if it would be possible for the trans-human to be exactly like a computer, but the thing is, nobody seems to be trying at all.


It depends on your definition of sentient. I have heard from someone that, currently, only carbon-based entities can be sentient. He went into some detail about the various properties of carbon and how silicon, even though it is in the same group of the periodic table as carbon, does not come close to carbon in this regard.
 
I thought "Sentient" meant that you were able to pick your nose without puncturing your brain.
 
If they succeeded, the question would be, is the ‘human’ actually human?

I would think what would make a human a human would be this...

Genesis 2:7 And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul.

...otherwise, ChatGPT is just glorified dust/silicon/sand.
 
I have lots of issues with ChatGPT and advanced AI in general.

In the mid-1980s (a long time ago, in the infancy of AI) I worked on a couple of AI-based expert-system projects, which were really just a way of building, maintaining, and searching through extensive databases. The AI tools coming out now are much more extensive, open-ended, and hard to evaluate.
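
For readers who never met one of those systems, here is a toy Python sketch of the general idea: a hand-written set of if-then rules applied repeatedly to a small fact base until nothing new can be concluded. The rules and facts are invented for illustration and are not taken from the projects mentioned above.

# Toy forward-chaining rule engine, illustrating the hand-built, database-like
# character of 1980s expert systems. The rules and facts are invented examples.

# Each rule pairs a set of required facts with a single conclusion.
RULES = [
    ({"engine_silent", "lights_dim"}, "battery_low"),
    ({"battery_low"}, "recommend_charging_battery"),
    ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
]

def infer(observed):
    """Apply RULES to the observed facts until no new conclusions appear."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine_silent", "lights_dim"}))
# -> {'engine_silent', 'lights_dim', 'battery_low', 'recommend_charging_battery'}

Everything such a system "knows" has to be typed in by hand, rule by rule, which is exactly why the newer, open-ended tools are so much harder to evaluate.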

It's not only that ChatGPT seems to have problems staying within the lines, sometimes going off and adding irrelevant or wrong information, and making up references.

If one tells ChatGPT to write a business plan that maximizes profit while minimizing accountability, that is what it will produce.

If one tells ChatGPT to write a scientific treatise on time travel, and carefully asks in ways that close off a truthful, forthright response, one can conceivably get a plausible-sounding paper that is complete hogwash.

I can see companies using this kind of tool to generate technical proposals that look great and sound convincing, only to doom the project to inevitable failure.

High-level AI will always serve the interests of its providers, not those of people in general.
 

The problem is that providing the controls required to ensure use that is ethical and considerate of potential negative consequences would mean the tool has to be closely guarded deep within a security screen, and thus not really available to most people.

Individuals and small organizations will never be given truly free access to these tools, or else the ethical and considerate standards could not be maintained.
 
For those interested, here is a link to a recent interview with the man overseeing the development of ChatGPT, OpenAI CEO Sam Altman.

I read this story last week and found it both fascinating and a bit scary.

For those familiar with Arthur C. Clarke, the future is apparently here. It is not hard to see the foundations of the benevolent tyranny of “HAL 9000” in this technology.

ABC News interview with Sam Altman

 
Any of the techy crowd care to chime in with their 2¢?
Benefits
Dangers
Observations
etc.

I asked, "what are the benefits and dangers of AI tools like ChatGPT?"

It responded:

AI tools like ChatGPT have both benefits and potential dangers. Here are some of them:

Benefits:

  1. Increased efficiency: AI tools like ChatGPT can quickly process vast amounts of information and provide relevant responses, saving time and increasing productivity.
  2. Personalization: AI tools can be personalized to the user's needs and preferences, creating a more personalized experience for the user.
  3. Accessibility: AI tools can help people with disabilities access information and communicate more easily.
  4. 24/7 availability: AI tools can be available round the clock, providing support and assistance at any time.
Dangers:

  1. Bias: AI tools like ChatGPT can reflect the biases of the data used to train them, perpetuating existing societal biases and discrimination.
  2. Privacy concerns: AI tools can collect personal data, and the use of that data may not always be transparent to the user.
  3. Dependence: Over-reliance on AI tools can result in a lack of critical thinking skills and can limit creativity and innovation.
  4. Security risks: AI tools can be vulnerable to cyber-attacks and hacking, potentially leading to breaches of personal information.
It's essential to use AI tools responsibly and ensure that they're used in ways that are ethical and considerate of potential negative consequences.

Another thing that strikes me: it seems that a lot of these computer scientists are obsessed with creating AI that can think like a human. It just seems kind of dumb. Wouldn't it be easier to take someone, an entity that is already human, and then try to improve his/her cognitive abilities so that there will be no need for AI/computers.
I have lots of issues with ChatGPT and advanced AI in general.

In the mid 1980's (a long time ago in the infancy of AI) I worked on a couple of AI based expert system projects, which were really just a way of building, maintaining and looking through extensive databases. The AI tools coming out now are much more extensive, open ended and hard to evauate.

It's not only that ChatGPT seems to have problems staying within the lines, sometimes going off and adding irrelevant and wrong info, and making up references.

If one tells ChatCPT to write a business plan that maximizes profit while minimizing accountability, that is what they will produce.

If one tells ChatGPT to write a scientific treatise on time travel, and carefully ask it in ways that close off a truthful forthright response, one can conceivably get a plausible sounding paper that is fully hogwash.

I can see companies using this kind of tool to generate technical proposals that look great and sound convincing only to doom the project to inevitable failure.

High level AI will always be leveraging the interests of its providers not those of people in general.

Hello crossnote, LearningToLetGo, Wan and Siloam;

I had never heard of ChatGPT until this morning and am not qualified to give any kind of constructive input, but I can share this:

ChatGPT was launched only five months ago, and the results so far reveal very little. We have resided in the SF Bay Area / Silicon Valley for 34 years. In our community there is always a new idea drawing lots of investment money into R&D, but most of them fall short and go away.

Granted, the technical ideas that have succeeded have long had a place in society: in human life, health, education, the workplace, communication, and so on.
Praise God.

Perhaps in time God may anoint AI to have some use and benefit in some area of society, but as I have preached before, nothing beats human, "live" testimony, because God has empowered humans to gain knowledge and wisdom within ourselves.

God bless you all.
 
People who lazily use this AI to perform writing and research tasks on a regular basis may regret their choices. Whatever tools God has given us will atrophy when not used and exercised. That includes mental abilities. Thus, after a while, one could end up unable to think clearly and critically about important life issues.
 
It is along these lines that I do daily math in my head, keep notebooks, use an actual checkbook, write with a fountain pen, and, to the amazement of my younger coworkers and store clerks, write in cursive.

Use it or lose it; this applies to the mental and spiritual as well as the physical.
 
Cursive? Is that one of those four-letter words? lol

Hey guys;

I remember learning cursive in 3rd grade. Today I use it when I write checks and sign my name. lol!
 