This is not AWES-specific, but thinking is definitely a skill that needs to be honed in order to think correctly about AWES as well.
My interpretation of the key points from the video…
1 Remember your priors. (Base rate neglect is a mistake.)
Ignored lessons from kite history need to be considered in AWES design.
2 Imagine you are wrong. Would the world look different?
Try to break everything you make. Find every weakness in your design and reconsider
3 Update incrementally
For the best probability of success, we should explore more possibilities, learn from wider experimentation, and accept more evidence, to build a clearer, broader picture of where our design horizon lies.
The strategy should point us the right way…
However, relying on probability is like relying on hope … never a good engineering strategy.
Bayesian thinking should be a requirement for anyone doing (applied) science.
I think you want to maximize the number of observations you are able to make. That requires a strong background in math, applied physics, CAD, manufacturing, materials, and an endless list of other things. You can’t have that background in everything, so for this reason alone, besides all the others, you should ask for help.
I don’t have a strong background in anything, besides in how to think maybe, so besides trying to improve my foundational skills I just have to muddle through, incompetently.
This thread resonates with me:
This for example:
Since I don’t know the maths and physics yet, I have to solve everything by doing experiments. So all I am thinking about is, what are possible solutions to this problem, how can I decompose that solution in its constituent parts, and how can I test the assumptions I’ve made in this solution, preferably with string, paper, cardboard, and tape?
If you look over my thinking over the years, it’s an endless branching of ideas, dead ends, and new beginnings. And still I’m not sure the entire thing isn’t just another dead end, so I have to approach it like a hobby, like learning to play the piano or playing chess, it’s the journey that’s important, not the getting there.
In the yahoo group there was a quote:
Absence of evidence is not evidence of absence
I disagree. While absence of evidence might not be conclusive evidence of absence it is evidence.
The thing one must consider is: How likely is evidence of A in a world where A is true vs a world where A is false?
Example: Much increased activity of the secret service: I don’t expect evidence to be much more likely either way.
Example: The sun going out: I expect evidence to be definitely present and very unlikely if it didn’t. However the sun going out is so unlikely that someone tricking me into thinking it did might be just as likely.
Example: A new US president is elected. I expect there to be strong evidence if it did happen and barely any if it didn’t.
Example: A politician makes a claim that could easily be proven, and proving it now would be to his/her advantage. The absence of that proof makes it unlikely that the claim is true.
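The examples above all reduce to one Bayesian update. Here is a minimal sketch, with toy numbers of my own (not from this thread), showing that when you don’t observe evidence, your belief shifts by the ratio P(no evidence | A) / P(no evidence | not A) — so absence of evidence IS evidence of absence whenever evidence would have been likelier under A:

```python
def update(prior, p_e_given_a, p_e_given_not_a, evidence_seen):
    """Return P(A | observation) via Bayes' theorem.

    p_e_given_a / p_e_given_not_a: probability of seeing evidence
    in a world where A is true / false.
    """
    if evidence_seen:
        like_a, like_not_a = p_e_given_a, p_e_given_not_a
    else:  # absence of evidence: use the complements
        like_a, like_not_a = 1 - p_e_given_a, 1 - p_e_given_not_a
    numerator = like_a * prior
    return numerator / (numerator + like_not_a * (1 - prior))

# "New US president elected": evidence near-certain if true, rare if
# false -- so seeing none is strong evidence of absence.
p1 = update(prior=0.5, p_e_given_a=0.99,
            p_e_given_not_a=0.01, evidence_seen=False)
print(round(p1, 3))  # 0.01: belief in A collapses

# "Secret service activity": evidence about equally likely either
# way, so its absence barely moves the prior.
p2 = update(prior=0.5, p_e_given_a=0.50,
            p_e_given_not_a=0.45, evidence_seen=False)
print(round(p2, 3))  # 0.476: almost unchanged from 0.5
```

The strength of the update is entirely in how asymmetric the two likelihoods are, which is exactly the question posed above.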
That’s the kind of context I remember it being used in.
As argued here, if “absence of evidence” is in fact “evidence”, then “absence” is proven impossible. If there is nothing but evidence possible, then there is nothing left for evidence to point to, except itself. That’s circular logic.
The major challenge to new AWE analysts is not so much their modestly adequate logical powers, but their relative lack of applicable domain knowledge to reason heuristically over. The most rational course is to acquire the missing knowledge, rather than spend extra time on pure logic. Physics, aerospace, kite-sports, meteorology, mechanical engineering; these are some applicable knowledge fields to master to develop AWE.
As for “being less wrong”, in R&D the winner is often whoever was wrong the most, by trying more things and learning the most from more mistakes.
Bayesian concepts are a start in probabilistic logic. Markovian logic is a next lesson. With enough such tools, the potential of modern AI emerges. But without domain knowledge, no amount of fancy logic counts. The logical ideal is a look-up table with a correct answer to every question, which requires minimal operational logic.
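To make the "Markovian" step concrete, here is a toy example of my own (the states and numbers are invented, not from this thread): the next state’s probability distribution depends only on the current state, and repeated updates settle into a stationary distribution.

```python
# Toy Markov chain: a prototype kite is either "flying" or "crashed".
# Transition probabilities are invented for illustration only.
transition = {
    "flying":  {"flying": 0.9, "crashed": 0.1},
    "crashed": {"flying": 0.3, "crashed": 0.7},
}

def step(dist):
    """One Markov update: propagate a distribution over states."""
    out = {s: 0.0 for s in transition}
    for state, p in dist.items():
        for nxt, p_nxt in transition[state].items():
            out[nxt] += p * p_nxt
    return out

dist = {"flying": 1.0, "crashed": 0.0}
for _ in range(50):
    dist = step(dist)
print(dist)  # converges toward {"flying": 0.75, "crashed": 0.25}
```

With these numbers the long-run fractions (0.75 flying, 0.25 crashed) follow from balancing 0.1 × P(flying) = 0.3 × P(crashed), regardless of the starting state — the kind of minimal operational logic the look-up-table ideal points at.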