Doc Edward Morbius ⭕<p>@johnwehrle@mastodon.social It's on my radar, though that radar is rather crowded ....</p><p>As for fringe / cultish things: there is a <em>long</em> history of such schemes existing Largely To Separate Rich Idiots From Their Money, and it turns out that you can also find a through line through <em>much</em> cultish thinking generally: it's a <a href="https://toot.cat/tags/MakeMOneyFast" class="mention hashtag" rel="tag">#<span>MakeMoneyFast</span></a> scheme.</p><p>All the more so if there is <a href="https://toot.cat/tags/InsanelyComplexReasoning" class="mention hashtag" rel="tag">#<span>InsanelyComplexReasoning</span></a> behind the core notions, in which we find again that <a href="https://toot.cat/tags/SmartPeopleAreMoreEasilyFooled" class="mention hashtag" rel="tag">#<span>SmartPeopleAreMoreEasilyFooled</span></a>. I've been meaning to mention <a href="https://toot.cat/tags/Kant" class="mention hashtag" rel="tag">#<span>Kant</span></a> and his <a href="https://toot.cat/tags/CritiqueOfPureReason" class="mention hashtag" rel="tag">#<span>CritiqueOfPureReason</span></a> in this thread before, so let's do it now. If what Kant showed is that <a href="https://toot.cat/tags/reason" class="mention hashtag" rel="tag">#<span>reason</span></a> is very often inferior to <a href="https://toot.cat/tags/epmiricism" class="mention hashtag" rel="tag">#<span>empiricism</span></a>, that is, to direct <a href="https://toot.cat/tags/evidence" class="mention hashtag" rel="tag">#<span>evidence</span></a> and <a href="https://toot.cat/tags/experience" class="mention hashtag" rel="tag">#<span>experience</span></a>, then the field of <a href="https://toot.cat/tags/GlobalCatastrophicRisk" class="mention hashtag" rel="tag">#<span>GlobalCatastrophicRisk</span></a>, for all the reasons (ahem, go with me here, please) given above, is <em>absolute fucking catnip</em>, because <strong>there can be no definitive evidence</strong>.</p><p>That's the fundamental problem of forecasting, prediction, and/or prophecy: it's inherently <em>non-empirical</em>. At best you can point to a track record of past successes, though <em>that</em> has some obvious issues:</p><ul><li><p>Predictions sufficiently vague / subjective that judging them is a crapshoot. A/K/A the Nostradamus and/or Cold Reading problems. <a href="https://en.wikipedia.org/wiki/Cold_reading" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">en.wikipedia.org/wiki/Cold_rea</span><span class="invisible">ding</span></a> (I suspect that much of the success of current LLM Generative AI models has foundations here.)</p></li><li><p>The Stock Picker's Scam: Find 1,024 marks and send each a stock-pick prediction, half saying it goes up, half saying it goes down. Whichever way the stock moves, repeat the mailing to the 512 who got the correct call, then to the 256, 128, and 64 after that. Finally, offer your next set of predictions <em>for some fee</em> to the final 32. Each of those 32 has just seen a record of five perfect predictions. What they <em>don't</em> see are the 992 others who received <em>incorrect</em> predictions. So a <em>full</em> prediction history is required. (A rough sketch of the arithmetic is below.)</p></li></ul><p>Beyond that, as noted above, similarities, mechanisms, mathematical foundations (e.g., thermodynamics), etc., are the best guides we have.</p>
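<p>A minimal sketch of that Stock Picker's arithmetic (mine, in Python, purely illustrative): the "picker" never predicts anything at all, yet after five rounds of coin flips 32 marks are each looking at a flawless five-for-five record.</p><pre><code>import random

# Stock Picker's Scam: split the marks in half each round, keep only those
# who happened to receive the correct call, and repeat five times.
marks = 1024
survivors = marks
history = []

for round_number in range(1, 6):
    up_calls = survivors // 2                      # half are told "it goes up"
    down_calls = survivors - up_calls              # the other half, "it goes down"
    market_went_up = random.choice([True, False])  # no actual prediction involved
    survivors = up_calls if market_went_up else down_calls
    history.append((round_number, survivors))

print(history)            # [(1, 512), (2, 256), (3, 128), (4, 64), (5, 32)]
print(marks - survivors)  # 992 marks saw at least one wrong "prediction"
</code></pre>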