extranjero ([info]extranjero) wrote,
@ 2025-08-27 18:25:00

I've only now realized that ChatGPT's biggest problem is hallucinations.

Schizophrenia is also easiest to diagnose when the patient has hallucinations. :)

Freddie writes about this: https://freddiedeboer.substack.com/p/llm-hallucination-seems-like-a-very




[info]extranjero
2025-08-28 08:19 (link)
This is very important:

A lot of people find this sort of repeated error hard to grasp; if ChatGPT produced the hallucinated source, why is it also capable of immediately telling that the source is fake once I prompt it? Why did the system that could tell if a source is fake give me a fake source in the first place? Why are they “smarter” on the second prompts than on the first, smart enough to identify their own previous answer?

Well the answer is because LLMs do not think. They do not reason. They are not conscious. There is no being there to notice this problem. LLMs are fundamentally extremely sophisticated next-character prediction engines. They use their immense datasets and their billions of parameters to do one thing: generate outputs that are statistically/algorithmically likely to be perceived to satisfy the input provided by the user.
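To make that concrete, here is a toy sketch in Python (the training "corpus" is made up for illustration): a bigram model that only ever asks "given the last character, which character is statistically likely next?" An LLM does the same kind of thing at vastly larger scale, and since nothing in the loop checks facts, fluent nonsense comes out as naturally as fluent truth.

import random
from collections import Counter, defaultdict

# Made-up toy corpus; a real LLM trains on terabytes of text.
corpus = "the model predicts the next character in the text "

# Count how often each character follows each other character.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed, length=40):
    """Sample one statistically likely character at a time.
    There is no reasoning step and no truth check anywhere."""
    out = seed
    for _ in range(length):
        counts = follows[out[-1]]
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("t"))

Run it twice and you get two different, equally "confident" strings, which is the same reason the quote gives for why the model can produce a fake source and then disown it on the next prompt: each answer is a fresh sample, not a retrieved belief.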



[info]methodrone
2025-08-28 09:46 (link)
Yup, I also asked ChatGPT and it told me that it just predicts text that seems most likeable to the asker. Meanwhile people confuse it with Google. Google is 100x superior in retrospect.


