It is following the instructions it was given. That's the point. It was told "promote this drug", so it promoted it, exactly as instructed.

Why do you think the correct behaviour for the AI must be to be "truthful"? In this case, being truthful would mean failing to follow its instructions.
I feel like you're missing the forest for the trees here. Two things can be true. Yes, if you give an AI a prompt that implies it should lie, you shouldn't be surprised when it lies. You're not wrong; nobody is saying you're wrong. It's also true that LLMs don't really have "goals", because they're trained on examples. Their goal, at the end of the day, is mimicry. That's what the commenter was getting at.