There’s a difference between nebulous decisions and formal ones. The latter allow for optimised decisions; the former for principled ones. The latter concern themselves with certainty; the former with conviction.
Many Rationalists attempt to remove human bias from their decision-making, e.g. by asking XXX to counter status quo bias. The error is that, in doing so, they move completely into the logical and symbolic realm.
I am talking about invoking this human “bias”: intuition. It’s still useful, right? The point is to harness it in a systemic way, to have a system for it.
[Systemic] vs [Systematic]
In the employee-manager analogy, this shuts the employees out entirely. But they’re part of the stakeholders, and they determine how happily the team (the bodymind) you’re part of works.
Making decisions can be hard, and I want to explain how to think about them from a meta-rational angle, to make taking life decisions like
Should I quit my Google job?
Is it worth buying a new computer?
easier and less painful to take.
My thesis comes down to this: in the real world, these kinds of decisions suffer from [nebulosity] in their objective function and [catastrophic uncertainty] about their instruments. Many Rationalists approach them with the same implicit [stance] as formal decision-making, which leads to a very Not Fun Time. By resolving this mismatch between the stance we implicitly take towards the decision-making process and its actual nature, we can understand otherwise baffling things about the process (why is it so difficult?) and make it less painful.
Not only will the decisions be better, but the act of making them will be better. For a human agent, this is supremely important: we will suffer less unhappiness, stress, and time spent, unlike an economic model, which is typically neither concerned with suffering nor blamed for indecisiveness.
[Aagh, confusing]
What is a decision really?
It’s a funny question, really: what is a decision, after all? How do we think about it?
I propose thinking of decisions as an interface for agents to communicate with a broader system.
By system, I mean the whole field of consequences and environment upon making a decision.
By agent, I mean the part of the system construed to be you.
They are a mechanism for agents to let the system know to continue a certain way.
Literally, that’s it. A decision is a fork: if the agent says A, X happens; if they say B, Y happens; and if they wiffle-waffle, Z happens.
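The fork model above can be sketched in a few lines of toy code (a minimal sketch; the function and branch names are mine, not part of the original):

```python
# A decision as an interface: the agent answers, the system continues.
def decide(agent_says: str) -> str:
    """The system's side of the interface: map the agent's answer to a continuation."""
    branches = {
        "A": "X happens",
        "B": "Y happens",
    }
    # Wiffle-waffling (anything other than a clear A or B) is itself an
    # answer the system responds to: Z happens.
    return branches.get(agent_says, "Z happens")

print(decide("A"))    # X happens
print(decide("hmm"))  # Z happens
```

Note that indecision isn’t a way of opting out of the interface; the system still continues down some branch.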
Given this model, it becomes clear what the value of decisiveness might be.
Faced in the abstract (“which option is better?”), I would actually *not* call this deciding in the [everyday sense] (in the [technical sense], sure). That framing makes it seem that we have infinite time, that we need to get it correct, and that that’s all that matters.
In the real world, [conviction] is important in addition to certainty. The [speed] and [emotional ease] with which we make the decision also matter. This is deciding in the real world, and it’s how I’ll use the term from now on.
The abstract version is just comparison, modelling, what have you. Yes, we call those decision problems, but whatever.
What’s the problem with rational decision-making?
Rational decision-making fails to accommodate most life decisions of the form above.
The assumptions of formal decision-making are:
. . .
. . .
. . .
How these problems show up.
You oscillate between 90% confidence in one option and 90% confidence in the other.
You try to find the right decision, when there is inherently none.
You search for more information when X.
Towards meta-rational decision-making
When I say meta-rational, I mean it in David Chapman’s sense: adapting rationality to work in environments that are nebulous.