Worker co-ops make for a high-risk portfolio. I'm putting my income into a Stoxx 600 ETF instead. That way I'm still being paid in company ownership, but I don't depend on the well-being of one specific company.
Raising new capital is also easier when you can sell part of the company for it.
The 0.26 Gini here in Belgium isn't achieved by taxing capital but by taxing labour. We attract capital by having a 0% capital gains tax on stocks/commodities/real estate/…
We attract talent because that capital provides high-paying jobs. We then tax the talent, who are quite divided and conquered.
That's far easier than trying to get world leaders to tax capital equally.
Median net wealth in Belgium is 256k USD per adult. We're a country of people who love to save.
🤷🏻‍♂️ What are you gonna do about it? It's more useful to me than talking to you.
"Large Language Models (LLMs) can produce different answers due to several factors:
Probabilistic Nature: LLMs generate responses based on a probability distribution. At each step, the model computes a distribution over possible next tokens and samples from it, so even minor changes in the prompt can lead to different responses. This inherent randomness means the same input can yield different outputs, especially when the model is set to a higher "temperature" parameter, which increases variability (see the sampling sketch after this quote).
Training Data: The data used to train an LLM significantly influences its outputs. If two LLMs are trained on different datasets, they might have different “worldviews” or biases, leading to varied responses. For example, an LLM trained primarily on scientific literature might provide more technical answers compared to one trained on general web text.
Model Architecture and Parameters: Different LLMs have unique architectures, parameter settings, and training objectives. These differences can lead to variations in how they process and generate text. For instance, some models might be fine-tuned for specific tasks like translation or summarization, which can affect their responses.
Contextual Sensitivity: LLMs are sensitive to the context provided in the input prompt. The more specific and detailed the prompt, the more likely the model is to generate a relevant and accurate response. Vague or ambiguous prompts can lead to more varied and less predictable outputs.
Censorship and Bias: Some LLMs, like DeepSeek, are designed to avoid discussing certain topics due to censorship or political sensitivity. For example, DeepSeek avoids answering questions about the Tiananmen Square massacre or the treatment of Uyghurs in China, reflecting the biases and restrictions imposed by its training data and regulatory environment.
Subjective vs. Objective Evaluation: Evaluating LLM responses can be subjective, depending on the criteria used. Metrics like accuracy, relevance, and coherence vary with the evaluator's perspective. For example, a response might be judged accurate because it matches a predefined answer, while a question with no single correct answer leaves the judgment to the evaluator's viewpoint.
In summary, the answers provided by LLMs are influenced by a combination of their training data, architectural design, probabilistic nature, and the specific context of the input prompts. These factors contribute to the variability and subjectivity of their responses."
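To make the "temperature" point in that quote concrete, here's a minimal sketch of temperature-scaled sampling. It assumes toy next-token logits rather than output from any real model; the function name and numbers are just illustrative.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from logits scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied outputs).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax with max-subtraction for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy next-token scores for four candidate tokens.
logits = [2.0, 1.5, 0.5, 0.1]
for t in (0.2, 1.0, 2.0):
    samples = [sample_with_temperature(logits, t) for _ in range(1000)]
    freqs = np.bincount(samples, minlength=len(logits)) / 1000
    print(f"temperature={t}: token frequencies {freqs}")
```

At 0.2 the top-scoring token wins almost every time; at 2.0 the picks spread across all four tokens. That spread is exactly why the same prompt can come back with a different answer on each run.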