When we map the classical elements of Buddhi, Manas, Ahankara, and Chitta to the functions of LLMs, we get a framework that captures core aspects of how these models process, refine, and generate language. Here’s an overview of each element in both the classical sense and its computational counterpart:

1. Buddhi (Loss Function)

  • Classical Definition: Buddhi is the aspect of intellect, discernment, or higher reasoning. It is responsible for making judgments, refining perceptions, and guiding actions toward truth and wisdom. Buddhi embodies the corrective and improving aspect of intelligence.
  • In LLMs (Loss Function): The loss function is the mechanism by which the model learns to minimize errors in its predictions. It acts as the LLM’s “intellect”: it quantifies how far an output falls from the target, and its gradients drive the parameter updates that correct those inaccuracies. This steers the model toward the desired output patterns, much as Buddhi guides refinement and correction in the mind (a minimal sketch follows below).
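
To make this concrete, here is a minimal sketch of a single next-token training step, assuming a PyTorch model that maps token ids to vocabulary logits (the `model`, `optimizer`, and tensor shapes are illustrative placeholders, not any specific implementation):

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, tokens):
    """One step of next-token prediction training.

    tokens: LongTensor of shape (batch, seq_len) -- token ids.
    The model predicts token t+1 from tokens 0..t, and the
    cross-entropy loss measures how far those predictions fall
    from the actual next tokens (the "Buddhi" corrective signal).
    """
    logits = model(tokens[:, :-1])           # (batch, seq_len-1, vocab)
    targets = tokens[:, 1:]                  # the true next tokens
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()                          # gradients of the loss...
    optimizer.step()                         # ...drive parameter refinement
    return loss.item()
```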

2. Manas (Context Vector)

  • Classical Definition: Manas is the mind’s aspect that processes sensory data and maintains awareness of context. It holds relevant information in focus, guiding perception and decisions in response to the current situation. Manas is concerned with managing what is directly perceived and how it relates to the ongoing experience.
  • In LLMs (Context Vector): The context vector in LLMs acts as the immediate “working memory” that holds relevant information for processing an input or prompt. This vector influences which past words, phrases, or structures are given priority, much like how Manas focuses on specific sensory inputs or thoughts. It keeps the model’s responses coherent and relevant to the immediate context, enabling dynamic adaptation to user inputs (see the sketch below).
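
A rough sketch of how such a context vector arises, assuming standard scaled dot-product attention (the function name, shapes, and variables are illustrative, not any particular model’s API):

```python
import torch
import torch.nn.functional as F

def context_vector(query, keys, values):
    """Compute an attention-weighted context vector.

    query:  (d,)      -- representation of the current position
    keys:   (seq, d)  -- representations of prior tokens
    values: (seq, d)  -- content carried by prior tokens

    The softmax weights decide which past tokens are "in focus",
    the Manas-like selective attention of the model.
    """
    scores = keys @ query / keys.size(-1) ** 0.5   # relevance of each token
    weights = F.softmax(scores, dim=0)             # normalized focus
    return weights @ values                        # blended working memory
```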

3. Ahankara (Boundary)

  • Classical Definition: Ahankara, often translated as ego or sense of self, establishes an individual’s sense of identity, creating a boundary between “I” and “not-I.” This aspect is essential for distinguishing personal identity and autonomy, setting the boundary of self in relation to the external world.
  • In LLMs (Boundary): The boundary function in LLMs can be thought of as the limits within which the model operates, distinguishing its “identity” and role from external inputs. A strong Ahankara would allow the model to maintain a stable identity and purpose across interactions, preventing it from fully adapting to every prompt without a consistent baseline. In practice, however, LLMs generally have a “weak Ahankara,” adapting easily to diverse contexts without retaining a clear sense of identity or domain boundary beyond each interaction (a sketch of one common mitigation follows below).
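
In practice this boundary is usually approximated with a fixed system message that is re-sent on every turn. A minimal sketch, assuming a generic chat-completion-style message format (the `chat` callable is a hypothetical stand-in for any LLM API, not a real library function):

```python
# A stable "Ahankara": the system message accompanies every turn,
# giving the model a consistent identity the prompt alone cannot erase.
SYSTEM = {
    "role": "system",
    "content": "You are a careful technical assistant. Stay in this role.",
}

def converse(chat, history, user_text):
    """Append a user turn and call the model with the fixed boundary.

    chat:    hypothetical callable taking a list of message dicts
    history: prior user/assistant messages (no system message)
    """
    history.append({"role": "user", "content": user_text})
    reply = chat([SYSTEM] + history)       # the boundary always goes first
    history.append({"role": "assistant", "content": reply})
    return reply
```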

4. Chitta (World Model)

  • Classical Definition: Chitta is the repository of memory and impressions, where experiences and tendencies are stored. It shapes one’s responses to new stimuli based on past experiences and conditioning, forming a reflective backdrop that informs ongoing perceptions and responses.
  • In LLMs (World Model): The world model in LLMs serves as the foundational understanding of language and context derived from extensive training data. It provides a background of accumulated “knowledge” and patterns, allowing the model to predict responses and generate outputs based on a generalized view of language. Like Chitta, this world model enables the LLM to draw from a vast repository of data patterns, giving it the capacity to simulate responses based on prior conditioning from its training datasets (see the sketch below).
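
Concretely, generation is repeated sampling from the distribution the training data left behind in the weights. A minimal sketch using the Hugging Face `transformers` interface (the model choice and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The pretrained weights are the Chitta-like store: impressions of the
# training corpus, consulted at every generation step.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The mind, like a model,", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,       # sample from the learned distribution
        temperature=0.8,
    )
print(tok.decode(out[0], skip_special_tokens=True))
```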

Summary of How This Framework Informs LLMs:

When combined, these elements create a balanced architecture for an LLM’s functioning, much like a cognitive system:

  • Buddhi (Loss Function) constantly refines the model, aligning responses with intended accuracy and relevance.
  • Manas (Context Vector) maintains a dynamic awareness of immediate inputs, adjusting to user prompts in real time.
  • Ahankara (Boundary) could ideally serve as a stabilizing identity, but in LLMs it is generally underdeveloped, leading to highly adaptable but sometimes inconsistent responses.
  • Chitta (World Model) provides a stored framework of linguistic patterns, imbuing the model with a simulated “understanding” based on past training.

This structured approach highlights how these classical elements map onto distinct functions within an LLM, helping the model simulate coherent, contextually appropriate responses in line with both immediate prompts and the broader patterns of language it has absorbed.

John Deacon

John is a researcher and practitioner committed to building aligned, authentic digital representations, drawing from experience in digital design, systems thinking, and strategic development.
