Using artificial intelligence (AI)

After years of work, nobody is yet able to predict whether any AWES for electricity production can be viable, or which AWE method could be successful. There is progress in AWE R&D, but the utility-scale market has still not arrived. Many parameters beyond the purely technical ones have to be considered.

Testing is a good thing, but trying all architectures and their variants can take years and years, and many AWES are being tested now. So a finer analysis is required.

There is a significant and growing body of data from tests and theoretical scientific studies. So modelling, using artificial intelligence (AI), could be a means to define the outlines of a viable AWES.

Researchers can only explore part of the AWE field, but in compensation they provide reliable and complete information on what they study, unlike discussions on AWE forums.

Let us imagine how AWE could progress if posts argued with orders of magnitude, even without data as accurate as in scientific papers.

Let us take the topic “Scaling laws in AWES design” (https://forum.awesystems.info/t/scaling-laws-in-awes-design/171) as an example: all the messages seem correct and interesting, but few orders of magnitude are given. In the end a consensus is not easy to reach. With software holding enough data, the orders of magnitude could stand out automatically.

Collaborative projects, like those on the Collaborative engineering projects topic, could be improved by using AI, as already mentioned.

Technological progress in China and some other countries comes from artificial intelligence to a large extent.

Towards a computerized “AWES developer”.

China’s advantage in AI is that they have platforms like WeChat with >1bn users per month.
That’s the kind of hefty data set a trend-spotting AI loves to chew over.

I think the AI idea is viable if you can describe, e.g., soft and rigid wings (parameterized), tethers, cars/winches/flygens, etc. Then just find the cheapest way to get X energy within certain constraints.
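As a rough illustration of that idea (all cost and power numbers below are made-up placeholders, not validated figures), a brute-force search over a parameterized design space could look something like this:

```python
# Minimal sketch: random search over a parameterized AWES design space.
# All cost and power coefficients below are illustrative placeholders.
import random

RHO = 1.2  # air density, kg/m^3

def annual_energy_kwh(design, mean_wind=8.0, capacity_factor=0.35):
    """Crude crosswind estimate: P = 0.5*rho*A*v^3*e, with e a lumped efficiency."""
    efficiency = {"soft": 0.8, "rigid": 1.5}[design["wing"]]  # placeholder factors
    p_watt = 0.5 * RHO * design["area_m2"] * mean_wind**3 * efficiency
    return p_watt / 1000.0 * 8760 * capacity_factor

def capex_eur(design):
    """Placeholder cost model: wing + tether + power take-off."""
    wing_cost = {"soft": 150, "rigid": 600}[design["wing"]] * design["area_m2"]
    tether_cost = 20 * design["tether_m"]
    pto_cost = {"winch": 40_000, "flygen": 70_000}[design["pto"]]
    return wing_cost + tether_cost + pto_cost

def random_design():
    return {
        "wing": random.choice(["soft", "rigid"]),
        "area_m2": random.uniform(10, 200),
        "tether_m": random.uniform(200, 1000),
        "pto": random.choice(["winch", "flygen"]),
    }

def cheapest_design(target_kwh=200_000, max_tether_m=800, trials=50_000):
    best = None
    for _ in range(trials):
        d = random_design()
        if d["tether_m"] > max_tether_m:       # example airspace constraint
            continue
        if annual_energy_kwh(d) < target_kwh:  # must meet the energy target
            continue
        if best is None or capex_eur(d) < capex_eur(best):
            best = d
    return best

if __name__ == "__main__":
    print(cheapest_design())
```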

The question is: can it beat human intuition? My feeling is that for every day of thinking about AWE, the «map» of good and bad options changes and hopefully gets more precise by the day. I think maybe the AI will not even get to the point of discovering, e.g., yoyo and winch, or Y tether.

The more structure you add, the easier time the AI will have. E.g. if you state that you want a rotary rig, the AI could easily dimension tethers and wing size/count etc.
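For instance, once the architecture is fixed, the dimensioning is a back-of-the-envelope exercise; in the sketch below the efficiency, load and strength numbers are only illustrative assumptions:

```python
# Toy dimensioning for a fixed architecture: given a target power and wind
# speed, size the wing area, then rate the tether for the resulting load.
# Efficiency, load and strength figures are illustrative assumptions only.
import math

RHO = 1.2           # air density, kg/m^3
SIGMA_UHMWPE = 3e9  # rough tensile strength of UHMWPE fibre, Pa

def wing_area(target_power_w, wind_ms, overall_eff=1.0):
    """A = P / (0.5 * rho * v^3 * e), with e a lumped crosswind efficiency."""
    return target_power_w / (0.5 * RHO * wind_ms**3 * overall_eff)

def tether_diameter(tension_n, safety_factor=3.0, sigma=SIGMA_UHMWPE):
    """Diameter such that tension * SF <= sigma * pi * d^2 / 4."""
    area = tension_n * safety_factor / sigma
    return 2.0 * math.sqrt(area / math.pi)

if __name__ == "__main__":
    A = wing_area(100_000, wind_ms=10, overall_eff=1.2)  # target ~100 kW
    tension = 8.0 * A * 0.5 * RHO * 10**2  # placeholder: peak force ~ 8x dynamic pressure x area
    print(f"wing area ~{A:.0f} m^2, tether diameter ~{tether_diameter(tension)*1000:.1f} mm")
```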

Dimensioning within a fixed architecture is not as interesting at this point, though, because IMHO we are not optimizing yet, rather just searching for plausible options.


To me it is! I would love to give a generative design program a drawing and ask it to optimize it. That’s how you make optimal use of both the human and the software. The optimization also should tell you if the idea you had is viable.

The massive resources being put towards developing generalised AI now benefit from self-evolving architectures.
The machine needed to solve an AI problem can be predicted from previously successful AI implementations.
According to http://feeds.soundcloud.com/stream/550131771-exponentialview-the-state-of-artificial.mp3
The speed at which a multi purpose machine can be retrained on new problems is going to surprise us.
I approached Alphabet X a while ago suggesting the Makani design could be bettered by network kites. Maybe foolishly, I also suggested to them that an AI could predict optimal kite network deployments. I really hope they tested at least one of those suggestions properly. Sadly, no feedback was offered.

There are two major currents in AI, historically: machines that learn for themselves, like neural nets, and others that are closely engineered, like expert systems. Neither of these, nor any hybrid, can yet resolve a problem like AWE unless it somehow has sufficient domain knowledge. How would an AI weigh critical factors like energy markets, airspace regulations, insurability, lifecycle analysis, failure modes, manufacturability, weather variations, and so on? Rod was right to suggest somewhere that any AI one might program would only tend to choose the same architecture one already was biased toward. Let an AI enter the AWE race, then; but where are the computer scientists who believe such a project is within current capability? No AI has yet invented anything close. It’s up to us to be the smarter minds, for now.

The funny part is that we, humans, can employ AI concepts in our own cognition. Perhaps the most powerful approach is Case-Based Reasoning, such as doctors and lawyers practice. The engineering version taught to us in AWE by Chris Carlin, Boeing (retired), is to look for Similarity Cases to our AWE problems. This is what we do when we look at things like cableways and the history of aviation for clues. At some point before AI is ready, a serious team will draft out a complex scoring matrix to rank architectures. This will not be true AI, but a computer-assisted project. Human expertise will still rule.
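As a toy illustration of such a computer-assisted scoring matrix (the architectures, criteria, weights and scores below are all hypothetical placeholders, not a real assessment):

```python
# A minimal weighted scoring matrix, as a computer-assisted (non-AI) ranking aid.
# Architecture names, criteria, weights and scores are hypothetical placeholders.
CRITERIA = {"energy_cost": 0.3, "safety": 0.25, "manufacturability": 0.2,
            "airspace": 0.15, "insurability": 0.1}

SCORES = {  # 1 (poor) .. 5 (good), purely illustrative
    "yoyo_soft_kite": {"energy_cost": 4, "safety": 3, "manufacturability": 4, "airspace": 3, "insurability": 3},
    "rigid_flygen":   {"energy_cost": 3, "safety": 3, "manufacturability": 2, "airspace": 3, "insurability": 2},
    "rotary_network": {"energy_cost": 3, "safety": 4, "manufacturability": 3, "airspace": 2, "insurability": 3},
}

def rank(scores=SCORES, weights=CRITERIA):
    """Weighted sum per architecture, sorted best-first."""
    totals = {arch: sum(weights[c] * s[c] for c in weights) for arch, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for arch, total in rank():
        print(f"{arch:16s} {total:.2f}")
```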


For clarity, that’s not what I suggest. Yes, certain historic AI types trained with specific design references (only cat pictures) will say everything is a cat, or only develop songs about Jesus. Newer AI forms are more general in their object ID and tracking. Thankfully, their music generation has also improved with evolution.
Given enough freedom and compute time and the right AWES architectural performance goals …
An AI testing varied combinations of basic string, material and PTO elements could evolve design patterns which kick any of our butts at design.
I hope this happens

AI has thrashed our design capabilities in many fields already @kitefreak … as in the above video … antenna design, structural design, even AI design… These are the classics; many more examples exist.

The design of an antenna, and similar narrow inventions by existing AI, is not closely comparable to solving AWE. No such current system could be turned from its small problem domain to a far more complex challenge.

Fine
We are in disagreement there then

Fortunately this question will be answered by events, as both AI and AWE unfold.

To chip away at the gap: this is a “constraint-resolution problem” known to be intractable across all the uncertain dimensions. If we can provide the wisdom to judge scaling, economics, manufacturing, safety, and other such complexities, perhaps we can partially down-select the big winning architecture.

If Kite Networks were therefore chosen for computer optimization, no doubt this would help predict forces and dynamics and serve to specify line ratings and so on.
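For example, once peak forces are predicted, turning them into a line rating could be sketched like this; the gust allowance, safety factor and catalogue values below are made-up assumptions:

```python
# Sketch of turning a predicted peak line force into a line rating:
# apply a gust allowance and a safety factor, then pick the lightest line in a
# hypothetical catalogue whose rated breaking strength covers the requirement.
CATALOGUE_KN = [10, 20, 40, 80, 160, 320]  # rated breaking strengths, kN (placeholder)

def required_rating_kn(peak_force_kn, gust_factor=1.5, safety_factor=3.0):
    """Required minimum breaking strength after allowances."""
    return peak_force_kn * gust_factor * safety_factor

def select_line(peak_force_kn):
    """Smallest catalogue line that covers the required rating."""
    need = required_rating_kn(peak_force_kn)
    for rating in CATALOGUE_KN:
        if rating >= need:
            return rating
    raise ValueError(f"no catalogue line covers {need:.0f} kN")

if __name__ == "__main__":
    print(select_line(peak_force_kn=15.0))  # -> 80 kN line in this toy catalogue
```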

Interesting to read:
https://spectrum.ieee.org/energywise/energy/renewables/ready-flyer-one-airborne-wind-energy-simulations-guide-the-leap-to-satisfying-global-energy-demand
There seems to be a fundamental conflict in thinking
Roland Schmehl “The big issue right now in airborne wind energy is scaling,” says Roland Schmehl, associate professor of aerospace engineering at the Delft University of Technology in the Netherlands. “The [AWE] development cycle is extremely long, because the technology is very complex. That’s one of the reasons you want this software. It allows us in the virtual realm to explore design options and other parameters.”
and
LaKSa enables his team to test different configurations without the time, expense, and logistical challenges involved in physically flying them. “For example we can mount larger or smaller kites,” Schmehl says. “We can fly lower or higher. And this tool provides the physical model to fit your system perfectly to a certain location and wind profile. Embedding a tool like LaKSa in an optimization framework, we can identify the optimum kite size with the perfect minimum and maximum tether length with the perfect elevation angle of the tether when we fly out doing these cross-wind maneuvers.”
Yet he is also quoted as thinking
"This kind of configuration is the result of years of simulation and iterative design cycles.”
The iterative steps in the design and simulation phase, Schmehl says, still require a human engineer to guide the process. He does not think a machine learning or deep learning code could be strapped to these simulators to map out the entire design space and discover optimum configurations and flight paths on their own.

Whilst in the same article … Sánchez-Arriaga…
“AWE systems need to operate autonomously in a broad range of varying conditions,” says Sánchez-Arriaga via email. “Therefore, LaKSa is a software package that can be used to study a broad range of systems… The simulator is user-friendly and offers a unique balance between fidelity and computational cost. For that reason, we think that it can be useful for researchers and companies that could study the dynamics and control problems for a broad variety of system configurations.”
and
On the other hand, Sánchez-Arriaga adds, “The modules can be seen as black-boxes that just provide the right-hand side of a system of ordinary differential equations. From this perspective, they are perfectly compatible with a broad set of optimization tools and machine learning algorithms.”
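That pattern, an ODE right-hand side treated as a black box inside an optimization loop, might look roughly like this in code. The right-hand-side function below is a stand-in toy model, not LaKSa’s actual interface, and the objective and cost terms are placeholders:

```python
# Generic sketch: treat the simulator module as a black box returning dy/dt,
# integrate it, and wrap the result in an optimizer. The RHS is a toy model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def rhs(t, y, kite_area):
    """Stand-in right-hand side: a damped oscillator driven by kite size."""
    x, v = y
    return [v, -0.5 * v - x + 0.1 * np.sqrt(kite_area)]

def mean_power(kite_area):
    """Integrate the black-box model and score the trajectory (toy objective)."""
    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], args=(kite_area,), max_step=0.1)
    produced = np.mean(sol.y[0] ** 2)      # placeholder proxy for harvested power
    cost = 0.0002 * kite_area ** 2         # placeholder cost growing with size
    return produced - cost

if __name__ == "__main__":
    res = minimize_scalar(lambda a: -mean_power(a), bounds=(5.0, 200.0), method="bounded")
    print(f"'optimal' kite area in this toy model: {res.x:.1f} m^2")
```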


Proposed: Classic Kite Autonomous Flight is Embodied AI.

By the time AI truly rivals human creativity, AWE will generally have been worked out by engineers who did not wait. The only currently powerful application of AI science is in improving human-expert heuristic reasoning augmented by Net-based knowledge. Having trained in AI methods almost from their modern beginnings, including study under several legendary founders working under notoriously stubborn limits, I find that my ongoing study of computer science, including quantum* and embodied computing, continues to vitally inform my theoretical AWES design understanding. No machine output can surprise us with any true inventive leap, for now. Solving AWE is still up to non-artificial intelligence.

  • Our local Quantum Computing meetup group in Austin (under La Cour) remotely logs onto IBM’s pioneering QC server to perform simple logical operations and explore novel computational problems. Similarly, classic kite autonomy is informing Embodied Computing theory, and analog QC principles illuminate kite passive-control dynamics.

Inherent sensory and actuation gaps, errors, and uncertainty ensure Ampyx cannot “rely on deterministic algorithms to ensure predictable behavior and safety.”

Crash statistics will best settle Ampyx’s claims “deterministically”.

To me optimizing for power generation seems like something well suited to machine learning.
One doesn’t have to give the neural net full control all the time. It could have its place as a part of a system otherwise comprised of deterministic algorithms.
Deterministic doesn’t necessarily mean safer. I think the Tesla approach to safety verification is valid.
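A sketch of that hybrid idea, with a stub standing in for the learned component and placeholder limits and gains:

```python
# Sketch of a hybrid controller: a learned component proposes a setpoint, but a
# deterministic envelope and inner loop always have the final say.
# The "model" here is a stub; in practice it would be a trained neural net.
def ml_suggest_reel_speed(wind_ms):
    """Stand-in for a learned policy (e.g. a neural net) maximising power."""
    return 0.33 * wind_ms  # placeholder heuristic output

def safe_reel_speed(wind_ms, min_ms=0.5, max_ms=4.0):
    """Deterministic envelope: clamp whatever the learned component proposes."""
    suggestion = ml_suggest_reel_speed(wind_ms)
    return max(min_ms, min(max_ms, suggestion))

def pi_controller(target, measured, integral, kp=2.0, ki=0.1, dt=0.02):
    """Classical deterministic inner loop tracking the (clamped) setpoint."""
    error = target - measured
    integral += error * dt
    return kp * error + ki * integral, integral

if __name__ == "__main__":
    setpoint = safe_reel_speed(wind_ms=9.0)
    command, _ = pi_controller(setpoint, measured=2.1, integral=0.0)
    print(f"clamped setpoint {setpoint:.2f} m/s, motor command {command:.2f}")
```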


The problem is not just that Machine Learning starts from current GIGO limitations. Let AI try to improve the advanced power kite: it can’t happen, because no AI knows much about kites.

Lloyd’s of London is the top technology “safety verification” enterprise. In AWE, the Fraunhofer Society is our top player in that role. No actual AI in the game.

AWE is too multidimensional a problem for AI for many years still to come. It would have to know everything from insurability and energy market dynamics to aerospace, especially kites. No one is even trying to build that AI for us. That’s smart.

Gigo?

It can’t really be said that ML neural nets “know anything”. You might be thinking of AGI (artificial general intelligence).

The system learns from data sets comprised of sensory input (wind speed, AoA, control surface positions). It learns the relations between those without knowing anything about what they stand for.
Later on, one will let it experiment by itself in simulation and/or the real world, optimising for power generation (maximize this input).

I was not talking about using an AGI to design an AWES.
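A minimal sketch of the learning setup described above, with a simple least-squares fit on synthetic data standing in for a neural net and a real flight log:

```python
# Learn power from logged sensor data (wind speed, AoA, control position)
# without any built-in knowledge of what the signals mean, then search the
# learned model for the control setting that maximises predicted power.
# A polynomial least-squares fit and synthetic data stand in for a neural net
# and a real flight log.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flight log": power peaks at an unknown control position (0.3).
wind = rng.uniform(5, 12, 2000)
aoa = rng.uniform(2, 12, 2000)
ctrl = rng.uniform(-1, 1, 2000)
power = 0.4 * wind**3 * (1 - (ctrl - 0.3) ** 2) + 0.5 * aoa + rng.normal(0, 5, 2000)

def features(w, a, c):
    """Simple polynomial features; the learner never sees physical meaning."""
    return np.column_stack([np.ones_like(w), w, w**2, w**3, a, c, c**2, w**3 * c, w**3 * c**2])

coef, *_ = np.linalg.lstsq(features(wind, aoa, ctrl), power, rcond=None)

def predicted_power(w, a, c):
    return features(np.atleast_1d(w), np.atleast_1d(a), np.atleast_1d(c)) @ coef

# "Maximize this input": search the control position at given conditions.
candidates = np.linspace(-1, 1, 201)
best = candidates[np.argmax(predicted_power(np.full_like(candidates, 9.0),
                                            np.full_like(candidates, 6.0),
                                            candidates))]
print(f"learned optimum control position ~{best:.2f} (true optimum 0.30)")
```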


GIGO is classical programming jargon for “garbage-in, garbage-out” processing. Of course artificial Neural Nets can “know” things, as based on biological brain connectionism and Turing equivalences.

Current sensing of kite flight is very crude compared to human pilot sensing capability. When you see a self-driving racing car winning, that will be the time for autonomous kites to also reach comparable human performance.

At least there are folks trying very hard to apply computer methods to kites, to settle any doubts about current practical limitations. Here is a baseline human kite flight performance to beat, by Ray Bethel, one of my teachers-

:rofl: You had an acronym for “garbage in, garbage out”. Classic @kitefreak

Yes, AGI neural nets will know things just as much as we do. But that’s not the case with anything done with machine learning / neural nets today.

I disagree. A pilot couldn’t tell you exact readings for anything.

I don’t think there’s a strong correlation. There isn’t even anyone trying to have an autopilot race against human drivers. Tesla Autopilot has fewer accidents than humans already.
Kites and cars are in very different environments. The largest part of an autonomous car’s job is to avoid hitting stuff.

There are kites already flying with autopilot today. Just not neural nets, but algorithms (as far as I know).
Since you seem to be impressed by beautiful displays, this is what drone autopilots can do: https://www.youtube.com/watch?v=yRMUNptyTag