This is very important:
A lot of people find this sort of repeated error hard to grasp: if ChatGPT produced the hallucinated source, why can it immediately tell that the source is fake once I prompt it to check? Why did a system capable of spotting a fake source give me a fake source in the first place? Why is it "smarter" on the second prompt than on the first, smart enough to identify the flaw in its own previous answer?
Well, the answer is that LLMs do not think. They do not reason. They are not conscious. There is no being there to notice this problem. LLMs are fundamentally extremely sophisticated next-token prediction engines. They use their immense datasets and their billions of parameters to do one thing: generate outputs that are statistically likely to be perceived as satisfying the prompt provided by the user.
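To make the point concrete, here is a minimal toy sketch of next-token sampling. Everything in it is invented for illustration: the probability table, the prompts, and the names TOY_DISTRIBUTION and sample_continuation are assumptions, and a real LLM encodes its conditional distribution in billions of learned parameters rather than a hand-written dictionary. The only thing the sketch shows is that each answer is sampled as a likely continuation of its own prompt, with nothing "checking" the previous answer.

import random

# Toy "next-token" table: for each prompt, the probability of each continuation.
# The numbers are made up purely for illustration.
TOY_DISTRIBUTION = {
    "List sources on this topic:": {
        "Smith (2019), Journal of Examples": 0.6,   # plausible-sounding, may not exist
        "I could not find a reliable source": 0.4,
    },
    "Is 'Smith (2019), Journal of Examples' a real publication?": {
        "I cannot verify that this source exists": 0.7,
        "Yes, it is a well-known paper": 0.3,
    },
}

def sample_continuation(prompt: str) -> str:
    """Sample a continuation in proportion to its probability for this prompt."""
    dist = TOY_DISTRIBUTION[prompt]
    continuations = list(dist.keys())
    weights = list(dist.values())
    return random.choices(continuations, weights=weights, k=1)[0]

if __name__ == "__main__":
    # The same machinery answers both prompts; each output is just a likely
    # continuation of its own prompt, not a verified statement of fact.
    print(sample_continuation("List sources on this topic:"))
    print(sample_continuation("Is 'Smith (2019), Journal of Examples' a real publication?"))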