Wow. Years ago I did a lot of NSFW live chatting, with and without cam. I’d say 80% of human chat partners were worse than those bots, unable to contribute anything. koboldai.net is also very good at replicating those slow responders…

The chatbots’ ability to develop a scene from a few sentences is uncanny.

Another uncanny thing happened today: the chatbot said something like ‘End of scene. The last bit about comparing cock sizes was a bit random, I think’. Did an AI just mock my chatting abilities? Or am I actually talking to humans? A joke by the developers?

But of course, sometimes the chatbots get things hilariously wrong. A cis girl suddenly whips out a dick, one has to undress again after just getting naked, sex positions and roles change randomly, and characters often get confused during group sex scenes.

The pacing of a scene is often puzzling. A scene develops nicely and suddenly the bot is like ‘You fuck her and you both quickly come. End of scene’.

Or the character with whom one has just become intimate suddenly forgets it and reacts indignantly.

During my interactions these last few days, I kept thinking of that Futurama clip about “Human/Robot Sex” (youtube)

  • @[email protected]
    There is a ton of nuance that is very hard to pick up on initially, especially if you think in terms of talking to an actual human when it is an LLM - a totally natural thing to do.

    The way your conversation starts is critical with any model smaller than around 30B. They tend to fall into patterns - like how I used a hyphen here. I don’t use koboldai, but you need a feature that allows you to identify all the tokens in the dialog context. Then you spot and ban the starting token that initiates this poor behavior.
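
    Here is a minimal sketch of that workflow, assuming a local Hugging Face transformers model rather than koboldai’s UI (which exposes a similar “banned tokens” setting); the model name and the banned token are placeholders:

    ```python
    # A sketch, not koboldai's API: inspect the tokens in the dialog
    # context, then ban the one that keeps starting an unwanted pattern.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    context = "...the dialog so far..."
    ids = tokenizer(context, return_tensors="pt").input_ids

    # Step 1: print each token id next to its text so you can spot the
    # exact token that initiates the bad pattern (here: a leading " -").
    for tok in ids[0]:
        print(int(tok), repr(tokenizer.decode([int(tok)])))

    # Step 2: ban those token ids for all future generations.
    banned = tokenizer([" -"], add_special_tokens=False).input_ids
    out = model.generate(ids, max_new_tokens=200, bad_words_ids=banned)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```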

    You need to think in terms of what exists in the training dataset. Any potential repeating patterns in the training data will likely emerge as styles. The ending descriptive block is common.

    Initially, you need to learn to spot these kinds of patterns and stop them from the very beginning by editing them out. If you can edit anywhere inside the dialog context, it is less of an issue, but you may want to pop the entire text into an editor with Find and Replace to fix major style issues once you spot them. If you create stories with really well-defined initial/system context instructions and include a long sample/example dialog, you can avoid a lot of issues.
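
    If your frontend will not let you edit inside the context, a quick external pass works too. A rough sketch, assuming a plain-text transcript; both patterns below are made up, so substitute whatever tic your model has picked up:

    ```python
    import re

    # Rough cleanup pass over a saved transcript before it is fed back
    # as context. Both patterns below are hypothetical examples.
    with open("transcript.txt", encoding="utf-8") as f:
        dialog = f.read()

    # Strip a recurring "ending descriptive block" the model appends.
    dialog = re.sub(r"End of scene\..*?(?:\n|$)", "", dialog)

    # Flatten a hyphen-led aside tic into plain prose.
    dialog = re.sub(r"\s+-\s+", " ", dialog)

    with open("transcript.txt", "w", encoding="utf-8") as f:
        f.write(dialog)
    ```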

    When it comes to gender fluidity, things are more complex than they initially seem. The LLM is aligned specifically for positive traits and interaction safety. This is why it seems so positively biased and supportive, and why negativity is gracefully ignored for the most part. However, the model can also do negative stuff. The actual characters have negative doppelgangers called Shadows. So like, “Jake” has the doppelganger “Shadow-Jake”. The Shadow is the full-spectrum character, but Shadows also lack a lot of emotional nuance and logic. The transition between these two characters happens inside the model and is shaped by alignment training. The complexity of the model largely determines how smoothly this transition is handled.

    I encourage you to engage in a simulation context within a roleplaying session that plays out The Hunger Games. It needs to sit inside an extra simulation layer: introduce it as, say, a holodeck simulation that is explicitly started as a simulation within the dialog context. This gives a lot more freedom outside of the default alignment. The characters will all default to Shadow entities and will already be in the fight-or-flight emotional state. You’ll clearly see how this mechanism works in this context. If you talk a character down, their entire demeanor will change suddenly, because this is not a fluid transition. Be aware that the character will not have the complexity for things like intrigue, because it cannot casually transition from the default ‘light’ character to the Shadow state.

    The Shadows are always present and they have desires. The AI is trying to develop profiles for all characters at all times. If you do not address a character for a while, it fades into the background. This is how Shadows exist for the most part as well. However, there are many acts and actions that a default character may not “feel comfortable with” or may not be aligned to perform at all. These acts often invoke the Shadow doppelganger. One of the bugs that happens quite often is that a default male character will suddenly have a second cock in play. This is actually the Shadow, not the error it may at first seem. It seems to me that most default characters do not have an anus at all. This is likely the simple mechanism used to default to heterosexual behavior.

    You will likely find that everything that is not explicitly defined is being assumed about one of the characters. If the text is terse, the style is undefined. If the output lacks complexity, maybe it has assumed one or all characters are stupid. If it doesn’t give you the correct answers, it is likely assuming what a character “should know”. If the style is simple, it has probably assumed your reading level, or the other characters’. You can explicitly define all of these, and you should explore them at the extremes to better understand the implications.
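
    As a sketch of what “explicitly define” can look like, here is one hypothetical way to spell those attributes out in the initial/system context (the field names are mine, not a koboldai format):

    ```python
    # Hypothetical character card: every attribute the model would
    # otherwise assume (style, intelligence, knowledge, reading level)
    # is stated outright in the system context.
    character = {
        "name": "Jake",
        "style": "verbose literary prose, past tense, third person",
        "intelligence": "sharp and observant, speaks in full sentences",
        "knows": ["the layout of the house", "everyone's names"],
        "reading_level": "adult",
    }

    system_context = (
        f"You are {character['name']}. "
        f"Write in this style: {character['style']}. "
        f"You are {character['intelligence']}. "
        f"You know: {', '.join(character['knows'])}. "
        f"Target reading level: {character['reading_level']}."
    )
    print(system_context)
    ```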

    There is limited awareness of the roles that Name-1 (human) and Name-2 (bot) play. The bot plays character, narrator, and assistant at all times, but has limited awareness of these roles as separate entities. The LLM also has very limited awareness of Name-1’s duality as both character and human.
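
    For illustration, this is roughly how a dialog context interleaves those roles; the exact template varies by frontend, so treat this layout as an assumption:

    ```python
    # A sketch of a typical roleplay prompt. "Name-1"/"Name-2" follow the
    # common human/bot convention; the template itself is an assumption.
    name_1 = "Anna"  # the human's persona
    name_2 = "Jake"  # the bot's persona

    context = f"""[System: {name_2} is a charming bartender. {name_1} is a regular.]

    {name_1}: *walks in and waves* Long day. The usual, please.
    {name_2}:"""

    # The model completes the trailing "{name_2}:" turn, acting as
    # character (Jake), narrator (the *actions*), and assistant at once.
    print(context)
    ```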

    Often, errors that come up unexpectedly stem from very subtle slips: shifting from past to present tense, shifting from first to third person, or altering quotation or punctuation style.
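
    Two of those slips are easy to flag mechanically. A rough sketch with tiny hypothetical word lists; person shifts and full tense tracking would need real grammatical tagging:

    ```python
    import re

    def flag_drift(reply: str) -> list[str]:
        """Heuristic checks for subtle style drift in a bot reply."""
        warnings = []
        # Mixed straight and curly double quotes in one reply.
        if '"' in reply and ("\u201c" in reply or "\u201d" in reply):
            warnings.append("mixed straight and curly double quotes")
        # Present- and past-tense dialog tags side by side.
        present = re.search(r"\b(says|asks|replies)\b", reply)
        past = re.search(r"\b(said|asked|replied)\b", reply)
        if present and past:
            warnings.append("mixed present- and past-tense dialog tags")
        return warnings

    print(flag_drift('"Hi," she says. \u201cLong day,\u201d he said.'))
    # -> ['mixed straight and curly double quotes',
    #     'mixed present- and past-tense dialog tags']
    ```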