[R] From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

Posted by xhluca@lemmy.ml to Natural Language Processing@lemmy.ml · English · 2 years ago

Link: arxiv.org

  • cross-posted to:
  • science@mander.xyz
  • technews@radiation.party
How does language inform our downstream thinking? In particular, how do humans make meaning from language--and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural language models with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)--a general-purpose symbolic substrate for generative world modeling. Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will provide a roadmap towards cognitive models and AI systems that synthesize the insights of both modern and classical computational perspectives.

A paper by: Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum

Tweets: https://twitter.com/gabe_grand/status/1672285672332312576
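The two-step pipeline the abstract describes (an LLM translates an utterance into code in a probabilistic language of thought, then Bayesian inference over that program answers downstream queries) can be illustrated with a small sketch. Note this is not the authors' implementation: the paper uses a probabilistic programming language, whereas the toy world model, the hard-coded "translation" the LLM is assumed to produce, and the rejection-sampling inference below are all simplifying assumptions chosen to keep the example self-contained.

```python
# Illustrative sketch of rational meaning construction (assumptions, not the paper's code):
#   1. an LLM maps a natural-language utterance to a code expression over a
#      generative world model (here the translation is hard-coded),
#   2. Bayesian inference conditions the model on that expression and answers a query.
import random

def world_model():
    """Toy generative world model: two players with latent skill; noisier
    performance on match day determines who wins."""
    skill = {p: random.gauss(0, 1) for p in ("alice", "bob")}
    perf = {p: skill[p] + random.gauss(0, 0.5) for p in ("alice", "bob")}
    winner = "alice" if perf["alice"] > perf["bob"] else "bob"
    return {"skill": skill, "winner": winner}

# Step 1 (assumed LLM output): the utterance "Alice beat Bob" translated into a
# condition over traces of the world model.
def translated_condition(trace):
    return trace["winner"] == "alice"

# Step 2: crude Bayesian inference via rejection sampling -- keep only traces
# consistent with the translated observation, then average the query over them.
def infer(query, condition, n_samples=20000):
    accepted = [t for t in (world_model() for _ in range(n_samples)) if condition(t)]
    return sum(query(t) for t in accepted) / len(accepted)

if __name__ == "__main__":
    # Downstream query (also expressible as translated code):
    # "How likely is it that Alice is more skilled than Bob?"
    p = infer(lambda t: t["skill"]["alice"] > t["skill"]["bob"], translated_condition)
    print(f"P(Alice more skilled | Alice beat Bob) ~= {p:.2f}")
```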


Natural Language Processing (!nlp@lemmy.ml)

A community to discuss research, development and products relating to Natural Language Processing.

Please prefix each post with the appropriate tags:

  • [R]: Research paper
  • [D]: Discussion post
  • [P]: Project (your own or others)
  • [N]: News article
  • [B]: Blog post (your own or others)
  • [C]: Conferences and Workshops
  • [M]: Meta discussions
