I started working with Coda in 2018 when they first left stealth mode. One of my docs was shown to Coda investor (and apparently smart guy) Reid Hoffman as evidence that people could “get” the Coda product. Here’s the email from 12/13/2018.
I was honored to say yes! I love them there too :)
I’ve gotten to watch it evolve since then, and I’ve played a very small role in providing feedback on the direction of the product. You can see my participation in the community on my accounts:
that helped open space parks improve the visitor parking experience and protect their assets from overuse. We had to shut it down in 2021 :(. While working there I ran the operations and wrote software for the devices. Our devices used machine-learning-based computer vision, were powered by solar panels, and connected over cellular networks. Turns out, all of those were hard.
Our product at Mount Falcon open space
I’m an avid rock climber, cyclist, and mountaineer. I’ve been trying to climb all the fourteeners (mountains taller than 14,000 feet) in Colorado, and I’m up to 54 out of 58 (four away!).
The “Knife’s Edge” on Capitol Peak.
Standing in the river as the sun sets in the San Juan Wilderness.
Near Sunlight Peak, before sunlight hit.
I have a hedonic love affair with mathematics. Recent gems include:
This spiral hidden somewhere in the Mandelbrot Set
The Sleeping Beauty Problem and the assumptions we can make about what we haven’t experienced.
I have a tentative resolution to the Sleeping Beauty Problem which I find satisfying.
It’s more fun for you to go read about the problem yourself and come up with your own conclusions. For me, this has a simple answer that lies outside of the domain of epistemology.
In my limited view, this is a clear example of how “ideally rational epistemic agents” are an insufficient model. The whole problem has a solution as soon as you add a cost function for error.
For example, let’s say that if Sleeping Beauty wakes up she loses $100 for each wrong answer she makes. A bummer, but at most she’ll lose $200.
In this case, her best strategy is to minimize her expected loss across many games. That points her to the thirder position (if this doesn’t ring a bell, you should read the article first).
However, if instead of losing $100 Sleeping Beauty loses her life each time she gets a wrong answer, she now wants to maximize her probability of survival.
In that case, she should go with the halfer position, as she’s more likely to survive.
You can confirm this by running a simulation (maybe in Coda!).
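If a spreadsheet isn’t handy, here’s a minimal Python sketch of the $100-per-wrong-answer version of the game (the strategy names and the `expected_loss` helper are my own framing, not part of the original problem):

```python
import random

def expected_loss(strategy, trials=100_000, penalty=100, seed=0):
    """Average loss per game when Sleeping Beauty always answers `strategy`.

    Each game: flip a fair coin; heads -> 1 awakening, tails -> 2.
    At every awakening she guesses the coin and loses `penalty` when wrong.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        coin = rng.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        # She gives the same answer at every awakening.
        total += sum(penalty for _ in range(awakenings) if strategy != coin)
    return total / trials

# Always answering "tails" (the thirder's bet) halves the expected loss:
print(expected_loss("tails"))  # wrong only once, when the coin lands heads
print(expected_loss("heads"))  # wrong twice whenever the coin lands tails
```

Running it shows always-tails losing about $50 per game versus about $100 for always-heads, matching the thirder bet above.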
My general takeaway from this is that what we want — what is useful and beneficial — informs what is rational, not the other way around. There are people who argue that our systems, our leaders, and our citizens are “too dumb, too irrational, lacking in critical thinking.” This thought experiment signals that more rationality alone is insufficient.