What are your strategies when a MySQL/MariaDB database server grows to have too much traffic for a single host to handle, i.e. scaling CPU/RAM is not an option anymore? Do you deploy ProxySQL to start splitting the traffic between two different hosts according to some rule? What would the rule be, and how would you split the data? Has anyone migrated to TiDB? In that case, what was the strategy to detect whether the SQL your app uses is fully compatible with TiDB?

  • kaotic@lemmy.world
    6 points · 2 days ago

    I work for a company that handles this in a few ways. We set up read replicas to take the large read queries and offload reads from the primary server. Data is replicated to the read replicas, so reporting can be run against them without adding load to the primary.
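
    Since the question mentions ProxySQL, here is a minimal sketch of how that read/write split is commonly expressed on its admin interface (the hostnames, hostgroup numbers, and the `app` user are placeholders, not our actual setup):

    ```sql
    -- run against the ProxySQL admin interface (default 127.0.0.1:6032)
    -- hostgroup 10 = primary (writes), hostgroup 20 = read replica(s)
    INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'db-primary.internal', 3306);
    INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'db-replica1.internal', 3306);

    -- the application user defaults to the writer hostgroup
    INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app', 'secret', 10);

    -- locking reads stay on the primary, all other SELECTs go to the replicas
    INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1);
    INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (2, 1, '^SELECT', 20, 1);

    LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
    LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
    LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
    ```

    The usual caveat is replication lag: anything that has to read its own writes should keep going to the primary.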

    The second approach is sharding. Sharding breaks a large table into smaller, more manageable chunks, distributing them across systems. This reduces the burden on any one server, improves performance, and enables scaling out as data or traffic increases.
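
    To make that concrete, here is a rough sketch of one common scheme - a shard directory plus a hash of the customer key (all the names and the modulo-2 split are made up for illustration):

    ```sql
    -- hypothetical shard directory, kept on a small metadata database
    CREATE TABLE shard_map (
      shard_id TINYINT UNSIGNED PRIMARY KEY,
      host     VARCHAR(255) NOT NULL
    );

    INSERT INTO shard_map (shard_id, host) VALUES (0, 'shard0.db.internal');
    INSERT INTO shard_map (shard_id, host) VALUES (1, 'shard1.db.internal');

    -- the application (or a proxy rule) picks the shard for a key,
    -- e.g. shard_id = customer_id % 2, looks up its host,
    -- and then runs the normal query against that one server:
    SELECT host FROM shard_map WHERE shard_id = 12345 % 2;
    SELECT * FROM orders WHERE customer_id = 12345;  -- executed only on shard1.db.internal
    ```

    Every shard holds the same schema but only its slice of the rows; the hard part is choosing a shard key so that most queries only have to touch one shard.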

    • jacksilver@lemmy.world
      1 point · 2 days ago

      These are the most common approaches I’m aware of.

      The only other real alternative, without changing the underlying DB or architecture, is to break up the database or archive data.

      Many times you don’t need old data and can ship it off to S3 or something similar. Additionally, you may have tables/data that are disconnected and could be broken off into two databases. Both of these tend to just push off the inevitable, but could buy time for a larger architectural change.
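
      For the archiving variant, the core of it can be as simple as the sketch below (table and column names are made up; in practice you would batch by primary key, e.g. with pt-archiver, to avoid long locks, and then dump the archive table off to S3):

      ```sql
      -- pick a fixed cutoff so both statements see the same set of rows
      SET @cutoff = NOW() - INTERVAL 1 YEAR;

      -- copy old rows into an archive table, then remove them from the hot table
      INSERT INTO orders_archive
      SELECT * FROM orders WHERE created_at < @cutoff;

      DELETE FROM orders WHERE created_at < @cutoff;
      ```

      The hot table stays small and cache-friendly, and the archive can live on cheaper storage or be exported entirely.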

  • bremen15@feddit.org
    7 points · 2 days ago

    That depends entirely on your load. A common approach is sharding. Often memcache can help, too.

  • MNByChoice@midwest.social
    2 points · 2 days ago

    A common approach is something like a UV 3000.

    Likely not what you want, but it is important to remember that there are ways to solve it with money.

    • Otto@programming.dev (OP)
      3 points · 2 days ago

      By UV 3000 you probably don’t mean the ultraviolet lamps that the first page of Google is full of when searching for this term…? I doubt UV - whatever it is - is a common approach.

      • MNByChoice@midwest.social
        1 point · edited · 2 days ago

        I apologize for not providing a link (https://en.m.wikipedia.org/wiki/Altix). I am not quickly finding specs.

        These were SGI Altix systems from before HPE bought SGI: tightly integrated clusters that operated as a single NUMA space. They are/were often used to host databases with massive shared memory.

        The smaller systems had lower numbers, and older generations had numerically lower numbers. A UV 1000 was two models previous to the UV 3000. The UV 100 was the same generation as the 1000, but smaller.

        If I recall correctly, the UV 100 had 3 TiB RAM. These are very old now, and only an example. The UV 3000 had way more RAM and CPUs.

        A modern single non-UV server, maxed out, can hit over 1 TiB of RAM (I have not specced one in a while). Expect that single server to cost over $20K; anything less means one is in the wrong section of the store.

        Edit: clarifying a point

        Edit2: Just checked, and a single server can hit 3 TiB of RAM with 128 cores for around $53K. Put that $53K up against the employee time for any other solution.

  • oldfart@lemm.ee
    1 point · 2 days ago

    Is it a single large table? Is it used by a single program you control, or by something like a web host with 1000 WordPress instances?