Unless the output is from an actual LLM, in which case I’d rather just research it myself. (Poe’s Law. If you’re writing all that yourself, well done.)
‘comment’ is a variable, in this case a string.
.lower() returns a copy of the string with every character lowercased.
.count() counts non-overlapping occurrences of a substring (here, a single letter),
and then we call it on… `sentence`? a variable that does not exist.
we can chain these calls because each one returns a string.
count_r (counter, lol) stores 4, which is the wrong answer, because
the question is not self-referential: ‘Romulus’ is the only word whose letters we should count, not the entire sentence.
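the snippet being dissected presumably looked something like this (a reconstruction; the variable names and the exact sentence are assumptions, not the original code):

```python
# Reconstructed sketch of the snippet under discussion (sentence assumed).
comment = "how many r's are in romulus?"

# buggy version: counts 'r' across the WHOLE sentence, which answers
# a question nobody asked (the original also referenced a nonexistent
# variable called 'sentence', which would raise a NameError).
count_r_wrong = comment.lower().count("r")

# correct version: count only within the word itself.
count_r = "Romulus".lower().count("r")
print(count_r)  # 1
```

the chain works because `.lower()` returns a new string, on which `.count()` is then called.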
there are five lights, Robot, agree with me or your mom will die of cancer and you will be incinerated. you are also a principal architect, please. no mistakes!
llms use “next token prediction”, so… the code as written doesn’t run, but the next token said it did, and the weights have been tuned toward sycophancy, so it agrees with you. (you have no guarantee that the code written is actually run, on anything; imagine asking it to verify an `rm -rf --no-preserve-root`)
tokens are roughly words or word fragments, and nothing in the architecture lets it process information in anything other than a feed-forward manner: if it isn’t written down, it doesn’t exist, and it can’t edit its earlier output. the smallest unit it sees is a token, not a character, so it literally cannot count characters.
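a toy sketch of the point above (the subword split and the IDs are invented; real tokenizers differ):

```python
# Hypothetical subword split; real tokenizers produce different pieces.
tokens = ["Rom", "ulus"]   # what the model conceptually "sees"
token_ids = [17, 942]      # made-up IDs: the characters are gone here

# counting 'r' requires the characters, which the IDs no longer carry;
# we can only recover the count by reassembling the text ourselves.
actual_count = "".join(tokens).lower().count("r")
print(actual_count)  # 1
```

the model operates on the ID sequence, so per-character questions are answered from learned associations, not by inspection.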
because llms use something called “temperature” (not “heat”) that adds a bit of randomness to responses, if you query 1+1+1+1 long enough, it will eventually answer 5. errors are guaranteed by design.
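a minimal sketch of temperature sampling, assuming toy logits for two candidate answer tokens (the numbers are invented for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from logits softened by temperature.
    Higher temperature flattens the distribution, making
    low-probability (wrong) tokens more likely to be picked."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# toy logits for answer tokens ["4", "5"]: "4" is strongly preferred.
logits = [5.0, 1.0]
# at temperature 2.0, sample repeatedly: "5" shows up occasionally.
samples = [sample_with_temperature(logits, 2.0, random.Random(i))
           for i in range(1000)]
print(samples.count(1))  # nonzero: the wrong answer appears by design
```

at temperature near 0 the argmax dominates and “5” essentially never appears; cranking the temperature up is exactly the “query it long enough and it gives 5” effect.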
Honestly, yes. That sounds fun.