Underwater drones

Australian company Advanced Navigation unveiled Hydrus - its new underwater drone. Using pressure-tolerant electronics and advanced inertial navigation systems, Hydrus can operate to a depth of 3,000m. It is also designed to operate autonomously, though it can use acoustic and optical communications if needed.

Acoustic communication technology is not new, but other companies such as EvoLogics are developing new platforms for underwater communications. While Hydrus is currently being used to map coral reefs and shipwrecks, its small form factor, autonomy and pressure resistance may make it suitable for military missions.

Formula 1 tracking drone

Dutch Drone Gods and Red Bull Racing have developed a drone capable of keeping up with Formula 1 cars. Designed to capture new filming perspectives, the drone can fly at 350km/h and carries a 4K camera. It weighs under 1kg and is remotely piloted in first-person view. Though built for commercial filming, the speed, acceleration and manoeuvrability of this drone may have battlefield applications.

New battery chemistries

Previous editions have explored new battery chemistries, including disordered rock salt. A new anode material - a silicon-carbon composite - is being researched by Group14. In Nature, a Chinese team has developed new electrolyte solvents that overcome the slowing of charging rates at low temperatures: fluoroacetonitrile shows high ionic conductivity even at -70°C, allowing for fast charging in the cold. Samsung has announced it will begin mass production of its solid-state battery in 2027, claiming a charge from 8% to 80% in nine minutes and an energy density of 900Wh/L.
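
As a rough illustration of what those headline figures imply - a back-of-the-envelope sketch rather than Samsung's published test conditions, with the 700Wh/L comparison value for current lithium-ion cells assumed purely for illustration - the claimed charge window and time work out to an average charging rate of roughly 5C:

    # Back-of-the-envelope check on the headline solid-state figures.
    # The 700 Wh/L comparison value for current lithium-ion cells is an
    # assumption for illustration, not a figure from the article.

    charge_window = 0.80 - 0.08        # fraction of capacity added (8% -> 80%)
    charge_time_h = 9 / 60             # nine minutes, in hours

    avg_c_rate = charge_window / charge_time_h
    print(f"Average charge rate: {avg_c_rate:.1f}C")         # ~4.8C

    claimed_density = 900              # Wh/L, Samsung's claim
    typical_li_ion = 700               # Wh/L, assumed comparison value
    gain = claimed_density / typical_li_ion - 1
    print(f"Volumetric energy density gain: {gain:.0%}")     # ~29%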

The plethora of new battery technologies promising greater energy density and shorter charging times may help accelerate electric vehicle adoption. However, with all new technologies a degree of scepticism is healthy; even if they deliver on their promises, commercialisation is often slower than anticipated. Samsung's announced date for mass production is therefore welcome news.

Air-breathing electric propulsion (ABEP)

DARPA has provided funding for a working prototype ABEP engine to power future satellites. Electric propulsion using ion thrusters has been around for some time: on-board gases such as argon or krypton are ionised and the ions accelerated to very high speeds to produce thrust. This thrust is usually very gentle (about the effort required to push an empty cup across a bench), but in the vacuum of space it can be sufficient to adjust satellite orbits. ABEP engines work by harvesting the thin air still present at very low orbital altitudes and ionising it as the propulsion gas.
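
To put the "empty cup" comparison into numbers, the ideal thrust of an ion thruster is simply the propellant mass flow rate multiplied by the ion exhaust velocity. The values in the sketch below are illustrative assumptions for a small thruster, not figures from the DARPA programme:

    # Ideal ion thruster thrust: T = mdot * v_e
    # (propellant mass flow rate times ion exhaust velocity).
    # Both values below are illustrative assumptions, not programme figures.

    mdot = 1e-6      # mass flow rate, kg/s (about 1 mg of gas per second)
    v_e = 30_000     # ion exhaust velocity, m/s (about 30 km/s)

    thrust = mdot * v_e
    print(f"Thrust: {thrust * 1000:.0f} mN")   # ~30 mN

At that level, 30mN is roughly the weight of a three-gram object, consistent with the gentle "push an empty cup" description.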

Propellant can account for a considerable proportion of spacecraft mass at launch, and the amount carried is finite. ABEP engines may therefore reduce launch mass and extend the useful life of satellites in very low Earth orbit (VLEO) before they need to be de-orbited.

Generative AI developments

Generative AI is now venturing into games. DeepMind has published GENIE - Generative Interactive Environments. This research preview was trained on 200,000 hours of video from 2D platform games. Though technically playable, the output is limited to 1 frame per second, with each new frame predicted from the player's directional input.
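
Conceptually, that playable-but-slow behaviour is an autoregressive loop: each new frame is predicted from the frame history plus the player's latest input. The sketch below illustrates the idea; the WorldModel class and its methods are hypothetical stand-ins, not DeepMind's published interface:

    # Conceptual sketch of a GENIE-style autoregressive world model.
    # WorldModel and its methods are hypothetical stand-ins, not DeepMind's API.

    class WorldModel:
        def predict_next_frame(self, frames: list, action: str):
            """Return a predicted image given the frame history and an action."""
            raise NotImplementedError  # placeholder for the learned model

    def play(model: WorldModel, first_frame, get_player_action, steps: int = 60):
        frames = [first_frame]
        for _ in range(steps):                 # runs at roughly 1 frame per second
            action = get_player_action()       # e.g. "left", "right", "jump"
            frames.append(model.predict_next_frame(frames, action))
        return frames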

The other side of that DeepMind coin is SIMA - Scalable Instructable Multiworld Agent. Trained on human instructions and gameplay, SIMA is designed to learn how to play games. Able to perform around 600 actions, SIMA is significant because it can learn to play without an immediate reward. Earlier game-playing AI agents used an action-reward approach, learning which actions increased a numeric score and avoiding actions that ended the game. SIMA's approach may allow AI agents to interact with a virtual environment more freely and purposefully.
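
The difference between the two approaches can be sketched in a few lines. Both policies below are hypothetical illustrations, not SIMA's actual architecture: the first learns from a numeric score signal, while the second simply conditions its next action on a natural-language goal.

    # Illustrative contrast between a reward-driven agent and an
    # instruction-following agent. Both policies are hypothetical sketches,
    # not SIMA's actual architecture.

    def reward_driven_step(policy, observation, score_delta):
        # Classic approach: learn which actions increase a numeric score.
        action = policy.act(observation)
        policy.update(observation, action, reward=score_delta)
        return action

    def instruction_following_step(policy, observation, instruction):
        # SIMA-style approach: behaviour is conditioned on a natural-language
        # goal (e.g. "chop the tree") rather than an explicit reward signal.
        return policy.act(observation, goal=instruction)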

Google has released VLOGGER, a generative video application. Starting with only a single reference image of a person's upper body and a voice clip, VLOGGER can generate a video of that person speaking, complete with facial expressions and arm gestures. This generation of video from minimal input builds on the 2019 anti-malaria campaign in which David Beckham's likeness spoke nine different languages. VLOGGER's outputs are still clearly artificial, but they will no doubt improve over time, further reducing the effort required to make content in multiple languages.

Covariant (an OpenAI spin-out) has introduced RFM-1, which joins a large language model with a robot. While the idea is not unique (see, for example, Robotic Transformer 2), RFM-1 can comprehend its surroundings, make informed decisions and adapt its actions to circumstances, such as identifying and picking items from a jumbled bin. This has potential benefits in warehousing operations: coping with varied items, configurations and damaged packaging, and reducing human injury rates.

Follow Futures Lab on LinkedIn to keep up to date with our latest news.
