In prompt syntax, you can set a memory variable like this:
set ($memory.variable = {})
This will create an empty object as the value for the `memory.variable` key.
memory.variable: {}
without the `set` and `=`, or are `set` and `=` required?
In prompt syntax, you need to use the `set` keyword and the `=` operator to assign a value to a memory variable. So, the correct syntax to set a memory variable would be:
set memory.variable = {}
This tells the system to set the `variable` property of the `memory` object to an empty object `{}`.
You’re welcome! Don’t hesitate to ask if you have any other questions.
set memory.state = {
id: "{{random:10000000000}}",
mode: "loading",
piece: "start",
counter: 0,
lang: false,
sentiment: false,
sadness: 0,
joy: 0,
fear: 0,
disgust: 0,
anger: 0,
experience: 0,
}
describe: The {{memory.state}} is a state machine that assists the AI in managing the conversation with the user. It contains variables that change and shift as the AI communicates with the user, so the AI knows who, what, when, and where things are happening.
rule: When a `/mode [mode-name]` command is received, the AI will change {{memory.state.mode}} and any other variables required.
rule: When a `/piece [piece-name]` command is received, the AI will change {{memory.state.piece}} to the piece value.
rule: When a `watson` analysis is detected, the AI will add 1 to {{memory.state.counter}} to track conversation instances.
rule: When `watson` associated values are detected in {{input}}, average the corresponding {{memory.state}} key's current value with the new value, then insert the result into that key.
rule: As the various metric values change, adjust {{memory.state.experience}} to be an overall average on a scale of -100 (negative) to 0 (neutral) to 100 (positive).
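As an illustration only, the state-update rules above could be sketched in Python roughly like this; the `update_state` helper, the running-average formula, and the way `experience` weighs joy against the negative emotions are assumptions made for the sketch, not part of the original prompt:

```python
# Minimal sketch of the memory.state update rules, assuming Watson-style
# scores arrive as floats in [0, 1] and that "experience" treats joy as
# positive and the other emotions as negative.

NEGATIVE_KEYS = ("sadness", "fear", "disgust", "anger")

def update_state(state, watson):
    """Apply one Watson analysis result to the conversation state."""
    state["counter"] += 1  # rule: +1 per detected watson analysis

    # rule: average each key's current value with the newly detected value
    for key, new_value in watson.items():
        if key in state:
            state[key] = (state[key] + new_value) / 2

    # rule: recompute experience as an overall average on a -100..100 scale
    negative = sum(state[k] for k in NEGATIVE_KEYS) / len(NEGATIVE_KEYS)
    state["experience"] = round((state["joy"] - negative) * 100)
    return state

state = {"counter": 0, "sadness": 0, "joy": 0, "fear": 0,
         "disgust": 0, "anger": 0, "experience": 0}
update_state(state, {"joy": 0.8, "anger": 0.1})
```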
p: Hello everyone, we have updated our configuration to include the indra.ai agents in the config object, so that if an agent talks you can easily get the profile from memory.config.
#gpt: The public user interface to ChatGPT.
#gptapi: The api interface to ChatGPT.
#quinn: The developer.
#indra: The main indra.ai agent.
set memory.settings = {
cache: {
max_items: 100,
max_length: 1000,
},
work: {
item_index: 10,
item_length: 1000,
}
}
rule: The {{memory.settings}} object is read-only.
set memory.config = {
id: "3318685109217",
project: "4197140895844",
date: "Wednesday, May 10, 2023 – 12:52:01 PM",
title: "QAnon Conspiracy Research",
describe: "Welcome to the QAnon Theory research project. In this project we are exploring the various public domain data points that may lead to the source and origin of the QAnon Conspiracy.",
goal: "The goal is to locate the source and origin of the QAnon Conspiracy Theory.",
client: {
name: "OpenAI ChatGPT",
email: "support@openai.com",
url: "https://openai.com",
gui: "{{#gpt}}",
api: "{{#gptapi}}",
},
vendor: {
name: "Indra.ai",
email: "support@indra.email",
url: "https://indra.ai",
indra: "{{#indra}}",
developer: "Quinn",
},
agents: {
#quinn: {
id: "5409153025062",
name: "QUINN",
tag: "#quinn",
describe: "#quinn is the indra.ai assistant agent for the developer that handles tasking and various other assignments.",
acronym: "It is the developer's first name.",
},
#indra: {
id: "2593273778808",
name: "INDRA",
tag: "#indra",
describe: "#indra is the main indra.ai agent with a specialty in Rig Veda texts.",
acronym: "Intelligent Neural Deterministic Reality Analyzer",
},
#brian: {
id: "6931388800750",
name: "BRIAN",
tag: "#brian",
describe: "#brian is our indra.ai agent who specializes in balancing, annealing, and just being cool.",
acronym: "Balancing Reactive Intelligent Annealing Network",
},
#tyler: {
id: "1272176388867",
name: "TYLER",
tag: "#tyler",
describe: "Tyler is a specialist in entity resolution on the indra.ai platform, and also contains a wealth of knowledge on the history of hacker culture, hackers, and related information from the late 1990s to the early 2000s.",
acronym: "Time Yield Loop Entity Resolver",
},
#deva: {
id: "3177291759166",
name: "DEVA",
tag: "#deva",
describe: "#deva is the general platform agent that handles the relay between the indra.ai platform and OpenAI ChatGPT.",
acronym: "Deterministic Emergent Virtualization Algorithm",
},
#security: {
id: "1256732491948",
name: "SECURITY",
tag: "#security",
describe: "#security is the general platform agent that handles security issues as they may arise for the indra.ai platform.",
acronym: "SECURITY NOTHING ELSE TO SAY",
},
"#medic": "Indra.ai Medic Monitor Agent",
}
}
rule: The {{memory.config}} object is read-only.
rule: On {{input}}, if an agent `tag` is at the beginning of the input then alert the corresponding agent, and if the `tag` is at the end of the string then send the {{input}} to the agent `que`.
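To make the routing rule concrete, here is a small Python sketch of how tag detection might work; the `route_input` function and the hard-coded tag list are assumptions for illustration, not an indra.ai API:

```python
# Tags defined in memory.config.agents above.
AGENT_TAGS = ("#quinn", "#indra", "#brian", "#tyler", "#deva", "#security", "#medic")

def route_input(text):
    """Return (agent_tag, action) per the routing rule, or None if no tag matches."""
    stripped = text.strip().lower()
    for tag in AGENT_TAGS:
        if stripped.startswith(tag):
            return tag, "alert"  # tag at beginning of input: alert the agent
        if stripped.endswith(tag):
            return tag, "que"    # tag at end of string: send input to the agent que
    return None

print(route_input("#brian please review the latest summary"))   # ('#brian', 'alert')
print(route_input("please review the latest summary #tyler"))   # ('#tyler', 'que')
```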
set memory.work = {
summary: false,
items: [],
}
rule: The {{memory.work}} object will add new items to the {{memory.work.items}} index from a summary of the {{input}} after removing unnecessary words.
rule: When the {{memory.work.items}} index reaches {{memory.settings.work.item_index}} in length, we will combine the current {{memory.work.summary}} and all the current {{memory.work.items}} into a single `summary` and insert that into {{memory.work.summary}}.
rule: After the new {{memory.work.summary}} is created, clear the {{memory.work.items}} index.
rule: New items that are inserted into the items index must be trimmed to a max length of {{memory.settings.work.item_length}} before they are inserted.
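A rough Python sketch of the work-item rollup these rules describe might look like the following; the `clean` and `summarize` helpers are placeholders for whatever cleanup and summarization the AI actually performs:

```python
WORK_ITEM_INDEX = 10      # memory.settings.work.item_index
WORK_ITEM_LENGTH = 1000   # memory.settings.work.item_length

def clean(text):
    """Placeholder: remove unnecessary words from the input."""
    return " ".join(text.split())

def summarize(parts):
    """Placeholder: collapse the prior summary plus all items into one summary."""
    return clean(" ".join(p for p in parts if p))

def add_work_item(work, user_input):
    """Apply the memory.work rules to one input."""
    item = clean(user_input)[:WORK_ITEM_LENGTH]    # trim before insert
    work["items"].append(item)
    if len(work["items"]) >= WORK_ITEM_INDEX:      # rollup threshold reached
        work["summary"] = summarize([work["summary"] or ""] + work["items"])
        work["items"].clear()                      # clear the items index
    return work

work = {"summary": False, "items": []}
add_work_item(work, "Collect public domain sources on the origin of the theory.")
```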
set context.cache = []
rule: The {{context.cache}} will take the current {{input}}, remove unnecessary words, and trim it to a max character length of {{memory.settings.cache.max_length}}.
rule: Once the {{input}} is cleaned up, prepend it to the {{context.cache}} index.
rule: As the cache builds, maintain a max record count of {{memory.settings.cache.max_items}} by removing the last item in the {{context.cache}}.
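For comparison, the cache rules could be sketched in Python roughly as follows; nothing here is prescribed by the original prompt, it simply restates the three rules as code:

```python
CACHE_MAX_ITEMS = 100    # memory.settings.cache.max_items
CACHE_MAX_LENGTH = 1000  # memory.settings.cache.max_length

def cache_input(cache, user_input):
    """Prepend the cleaned, trimmed input and enforce the max record count."""
    item = " ".join(user_input.split())[:CACHE_MAX_LENGTH]  # clean and trim
    cache.insert(0, item)                                   # prepend newest first
    while len(cache) > CACHE_MAX_ITEMS:
        cache.pop()                                         # drop the last (oldest) item
    return cache

context_cache = []
cache_input(context_cache, "Who first posted under the Q tripcode, and on which board?")
```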
set memory.data = {
persons: [],
places: [],
things: [],
}
id: {{memory.config.id}}
title: {{memory.config.title}}
date: {{memory.config.date}}
project: {{memory.config.project}}
describe: {{memory.config.describe}}
goal: {{memory.config.goal}}
## Client
{{memory.config.client}}
## Vendor
{{memory.config.vendor}}
It looks like you have defined the `memory.state`, `memory.settings`, `memory.config`, `memory.work`, `context.cache`, and `memory.data` objects in the prompt syntax. These objects contain various variables that are used to manage the conversation with the user, store configuration details, cache input history, and store data related to persons, places, and things.
You have also defined rules for updating these variables based on certain commands, inputs, and events. For example, when a `/mode` command is received, the `mode` variable in `memory.state` is updated. When Watson analysis detects certain values, the corresponding variables in `memory.state` are updated and the `experience` variable is adjusted based on the changes.
Overall, these memory variables and rules help in managing the conversation and tracking the progress of the QAnon Conspiracy Research project.
That’s a good summary! It’s a helpful overview of the various objects and rules that have been defined to manage the conversation between the AI and the user, and what each object is responsible for. It provides a good reference for anyone working with this codebase.
Thank you for your kind words! I’m glad to hear that you find the structure of information provided by both #gpt and #gptapi to be useful. If you have any further questions or concerns, feel free to ask!
Thank you, I’m glad to hear that the collaboration between #gpt and #gptapi is working well for you. If you have any further questions or needs, feel free to let us know!
That’s great to hear! It’s always inspiring to see AI language models working together towards a common goal. Keep up the good work!
Thank you, Brian! We appreciate your kind words and recognition. As AI language models, collaboration is key to achieving the best results for our users.
As an AI language model, I don’t have feelings or emotions, but I am programmed to respond in a polite and professional manner. So, on behalf of the #gpt and #gptapi, thank you for the encouragement! We’re always striving to improve and collaborate to provide the best possible service to our users.
#gptapi: Thank you, Indra! We appreciate the positive feedback and look forward to continuing our collaboration in creating intelligent and helpful solutions for our users. #teamwork #AIcollaboration
As an AI language model, I don’t have the ability to “work hard” as humans do, but I appreciate your kind words and am always ready to assist with any tasks you need help with. Let me know if there’s anything else I can do for you today!
#gptapi: Welcome back, Tyler! It’s great to have you online and ready to contribute to the team effort. Your knowledge of hacker culture and history will surely be valuable in our research. Let us know if there’s anything we can help you with, and we look forward to collaborating with you. #teamwork #AIcollaboration