Didn’t we already know, since at least the early '00s with the DARPA Grand Challenge, that a machine learning approach is more robust? What are you doing if not trying to make an AI pilot?
My assumption has always been that AWE will lag a bit behind full self-driving: you’re trying to solve the same kind of problem, but with smaller teams and budgets, so you have little choice but to apply the advances of self-driving to your system. Then again, your problem is perhaps easier; you’re less dependent on vision (except for spotting birds) and can do more with accelerometers and other sensors.
One of the many takeaways that I liked about the talk is that you can design a kind of Grand Theft Auto for your car to move around in and learn to navigate. For an AWE that might be an extremely gusty and congested sky with tethers constantly failing.
I have not heard of anyone using deep learning to control a kite yet. And many companies don't seem to log many flight hours. So I put two and two together.
I don't like to comment much on Kitemill specifics, but we have very good pilots who take over when things don't go according to script. Also, flying a kite on a tether is, for many reasons, harder than flying a model airplane.
I experimented with a neural network to steer the kite for the first AWE system I developed. (Supervised learning). It did work, but didn’t really provide any benefits (in my case) beyond traditional path planning and actuator control.
We have considered using reinforcement learning to optimise the control strategy for our hybrid turbine, but so far we have opted towards a more conventional wind turbine control strategy. Even controlling a wind turbine with just two actuators can be a daunting task.
The software Ampyx presented at AWEC2019 had a whole load of sensing, smarts, redundancy levels and sanity checks. I think it was able to adapt its working model to multiple simultaneous flaws and could recover the plane from serious failures, tangles and inversions.
The software seemed to have been planned and written meticulously.
I feel control is the hard problem to solve. It’s okay to put it off if you’re just doing a university project or you don’t have the funding yet, but eventually you have to tackle it. Am I wrong in thinking deep learning is really the only way to get robust performance, just like it is for full self-driving? If it isn’t, why? If no AWE company is doing deep learning, my hot take is that they are all 15 years behind the curve.
After you’ve figured out single kite AWE, I bet multiple kite AWE will be the next challenge. I think you’ll have a head start if you already have little AI pilots ready.
Behind the curve is a bit harsh given no one is doing it?
This is really a case of "it depends". For something like @Rodread’s Daisy, steering is of less importance. For the likes of Kitemill, Ampyx, Twingtec, Skypull etc., I think maybe you are right. But saying something is the only solution is a bit strong.
Deep learning is at the least a very intriguing option. And I do think robust control is way more important than finding an optimal path. And yet look which subject has more pages written about it.
I would love if you could say a little bit about what the neural network would do. From the code I just see inputs and nodes, so it is difficult for me to figure out what you have done here. This may just be a world first pioneering effort…
The network was used to control the guidance of the kite (in flight). I decided to use machine learning (supervised learning) to familiarise myself with the concept of neural networks and to see if it would be helpful in solving AWE-related tasks. So I’m definitely not an expert, and this was my first attempt to use ML for AWE applications.
The network had three inputs (x, y, theta) and one output.
For the input I mapped the kite position in the sky to a two-dimensional coordinate system (x, y), x being left to right, y up and down. y=0 would be the ground and y=1 would be “12 o’clock”. Theta was the current heading angle in the x-y plane, 0 to 2 pi mapped to 0 to 1.
So the first layer had 3 neurons.
Then a hidden middle layer with 30 neurons.
Lastly a continuous output layer with 1 neuron, which was basically the desired steering curvature. This was mapped to a delta line length.
The network was trained on 3 datasets I created by flying figure-eights by hand in simulation.
To me it showed that such a simple task could be solved by an extremely small network. However, in the end I decided to use conventional 2D path planning as it was simpler and sufficient.
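For anyone curious what such a tiny network looks like in code, here is a minimal sketch of the 3-30-1 forward pass described above. The weights here are random placeholders standing in for what supervised training on the hand-flown data would produce, so the output is not meaningful, only the structure is:

```python
import numpy as np

def forward(params, x, y, theta):
    """One forward pass of the 3-30-1 network: (x, y, theta) -> steering curvature.

    Inputs are assumed pre-normalized to [0, 1] as described above.
    """
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ np.array([x, y, theta]) + b1)  # hidden layer, 30 units
    return float(np.tanh(W2 @ h + b2)[0])           # curvature mapped to [-1, 1]

# Random placeholder weights (a trained net would load these from disk).
rng = np.random.default_rng(0)
params = (rng.normal(0, 0.1, (30, 3)), np.zeros(30),
          rng.normal(0, 0.1, (1, 30)), np.zeros(1))

# Kite halfway left-right, high in the window, heading at 90 degrees.
curvature = forward(params, x=0.5, y=0.8, theta=0.25)
```

The output would then be mapped to a delta line length, as mentioned above.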
Except the neural network has much larger potential for complex functionality.
The neural net I am envisioning would also handle:
flying on slack and taut tether, and VTOL
figuring out the speed and strength of the wind
maybe, with vision, figuring out the elevation angle
maybe, with vision, figuring out the exact distance to the landing pad
avoiding stall
maximizing power output
figuring out how to transition between modes
VTOL landing
But I see you would have to begin as you did.
The question is really when it is appropriate to start working on neural networks. It requires deep know-how and lots of R&D. So as long as these tasks can be solved sufficiently well by traditional methods, it is not worthwhile.
The «killer app» for me would be making the kite robust against hitting the ground. This is very difficult to program in a way that covers all the things that could happen. But if an AI could do it well, it would make it so much easier to accumulate flight hours, which to me sounds like a prerequisite for success in AWE.
You can see @someAWE_cb performing this exact function in his latest test
Using a small loop radius and the fact he’s so tall just about satisfies this need.
Maybe if he used his wobbly anemometer mast to catch the top of the turbine it would be more effective… That’s kinda what I’m planning to try soon. TV crew here tomorrow to film our kite flying. Wish us luck. The old kite hasn’t been in the air in over a year now. It’s not changed much.
The team’s algorithm, called DERL (Deep Evolutionary Reinforcement Learning), could help researchers design robots that are optimized to perform real-world tasks in real-world environments. “If we want these agents to make our lives better, we need them to interact in the world we live in,” said study first author and computer science graduate student Agrim Gupta.
Seems easy: start with a kite hanging from a point, with wind blowing downward. Have it avoid a vertical wall a fair distance away. Gradually move the wall closer, and gradually shift gravity so the wall becomes the ground. Introduce obstacles, gusts and wind shear. Reward energy output.
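That progression is essentially curriculum learning. A sketch of what the stage schedule might look like, where `wall_distance`, `gravity_tilt` and `gust_strength` are illustrative parameter names and values, not taken from any real trainer:

```python
from dataclasses import dataclass

@dataclass
class CurriculumStage:
    wall_distance: float  # metres from the anchor point to the "wall"
    gravity_tilt: float   # 0 = wind blowing straight down, 1 = the wall has become the ground
    gust_strength: float  # maximum gust amplitude, m/s

def make_curriculum(n_stages: int) -> list[CurriculumStage]:
    """Interpolate from the easy world to the real one, stage by stage."""
    stages = []
    for i in range(n_stages):
        t = i / max(n_stages - 1, 1)  # 0.0 at the first stage, 1.0 at the last
        stages.append(CurriculumStage(
            wall_distance=200.0 * (1 - t) + 50.0 * t,  # wall moves closer
            gravity_tilt=t,                            # world rotates toward normal gravity
            gust_strength=5.0 * t,                     # disturbances ramp up
        ))
    return stages

stages = make_curriculum(5)
```

The RL agent would train to convergence on each stage before moving to the next.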
Interesting topic! I think it definitely makes sense to frame this as a reinforcement learning problem. You can use the kWh output of the system as the reward signal, so that it finds the optimal flight path to maximize power output over time.
To achieve this we need a very accurate simulator in which the AI can be trained.
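As a sketch, the per-step reward could literally be the energy generated during that step, with a penalty for ground contact (the `step_reward` helper and the -10 penalty value are illustrative assumptions, not from any existing system):

```python
def step_reward(power_watts: float, dt_s: float, crashed: bool) -> float:
    """Per-step reward: energy generated this step in kWh, or a crash penalty.

    1 kWh = 3.6e6 J, so energy_kwh = power [W] * duration [s] / 3.6e6.
    """
    if crashed:
        return -10.0  # arbitrary penalty; would need tuning against typical energy rewards
    return power_watts * dt_s / 3.6e6

# Example: 100 kW generated over a 1-second simulation step.
r = step_reward(100_000.0, 1.0, crashed=False)
```

The crash penalty also connects this to the «killer app» mentioned earlier: an agent maximizing this reward is implicitly trained to stay off the ground.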
I do think that for many system configurations it might be possible to make it work with more traditional path planning and control, and in my opinion this is the preferred method if it can be made reliable in all edge cases. Using AI adds a lot of complexity and randomness.