• Die4Ever@retrolemmy.com
    3 days ago

    This isn’t the first time ChatGPT has had this issue. ChatGPT supports “tools”, which means the LLM can call a traditionally programmed function to do things like search the Internet or do math. Why don’t they just add a tool for it to report/flag a chat? Everyone already knows ChatGPT isn’t private. The system prompt could tell it to report/flag chats about suicide or illegal activity, and I think the LLM would report them pretty reliably before they got very far.
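    To sketch what I mean: in the OpenAI function-calling format, a flagging tool could be declared as a JSON schema that the model is allowed to invoke. The `flag_chat` name, the categories, and the handler are all hypothetical, just an illustration of the shape such a tool might take:

    ```python
    # Hypothetical "flag_chat" tool in the OpenAI Chat Completions
    # function-calling format. The tool name, categories, and handler
    # are assumptions for illustration, not a real OpenAI feature.
    flag_chat_tool = {
        "type": "function",
        "function": {
            "name": "flag_chat",
            "description": (
                "Report the current conversation for human review when it "
                "involves self-harm or illegal activity."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "category": {
                        "type": "string",
                        "enum": ["self_harm", "illegal_activity", "other"],
                    },
                    "reason": {"type": "string"},
                },
                "required": ["category", "reason"],
            },
        },
    }

    def handle_flag_chat(arguments: dict) -> dict:
        """Server-side handler: record the flag and acknowledge the call."""
        # In a real deployment this would write to a moderation/review queue.
        return {"status": "flagged", "category": arguments["category"]}
    ```

    The model would call this like any other tool when the conversation matches the prompt's criteria, and the backend handler decides what actually happens with the flag.
    
    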

    I guess this case started back in April, though, so maybe they’ve already done this.

    I asked ChatGPT what tools it has, but it’s possible there are some it isn’t allowed to talk about.

    tools

    Yes — I support a number of built-in tools. Here is the complete list of tools available in this environment:

    All Available Tools

    1. web Search the internet or open webpages. Used for fresh, factual, time-sensitive information.

    2. python Run private Python code (not visible to the user). Used for internal reasoning, analysis, parsing, etc.

    3. python_user_visible Run Python code with visible output — charts, tables, files (PDF, DOCX, XLSX, etc.).

    4. canmore Create and update canvas documents (markdown, code, multi-file-style editing).

    5. bio Memory system: store or forget user information when requested.

    6. api_tool A file-system–like abstraction for discovering and invoking additional tools. (Meta-tool.)

    7. image_gen Generate or edit images based on descriptions.

    8. container Run commands inside a sandboxed shell-like environment.


    If you want, I can also describe what each tool is best for, or show examples of how they’re used.

    https://chatgpt.com/share/692f1cc7-3a80-8010-afa2-6576474fae38