Making your own embeddings is for RAG. Most base model providers have standardized on OpenAI's embeddings scheme, but there are many alternatives. Typically you embed a chunk of text at a time (a few hundred tokens, say) and store the resulting vector in your vector database. This lets your AI later do some vector math (usually a cosine similarity search) to see how similar (related) the stored chunks are to each other and to what you asked about. There are fine-tuning schemes where you generate embeddings before the tuning as well, but most people today just use whatever fine-tuning service their base model provider offers, which usually abstracts those details away.
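The "vector math" part is simpler than it sounds. Here's a minimal sketch of cosine similarity search over a toy in-memory store; the vectors are made up for illustration (in a real RAG setup they'd come from an embedding model, and the store would be an actual vector database):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (|a| * |b|) -- 1.0 means same direction, 0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": text chunk -> embedding.
# These 3-dim vectors are hand-picked stand-ins; real embeddings
# have hundreds or thousands of dimensions.
store = {
    "cats are mammals": [0.9, 0.1, 0.0],
    "the stock market fell": [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "tell me about cats".
query_vec = [0.8, 0.2, 0.1]

# Retrieve the chunk whose vector points most nearly the same direction.
best = max(store, key=lambda chunk: cosine_similarity(store[chunk], query_vec))
print(best)  # -> "cats are mammals"
```

At scale you wouldn't brute-force every stored vector like this; vector databases use approximate nearest-neighbor indexes to keep lookups fast, but the similarity math is the same.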
I believe in the Quantum Claus™ theory - there's just one guy, and he just makes one present for just one kid (on the nice list, which has at most just one name). But on Christmas Eve he exists in a superposition of states at every child's house with every possible gift.