Altran announced that it is delivering enhancements to its ENSCONCE edge computing platform. The platform now integrates Open Network Edge Services Software (OpenNESS), an open-source toolkit developed by Intel, along with other Intel technologies, and combines multiple capabilities, accelerators and frameworks for rapid development of multi-access edge computing (MEC) solutions. The integration allows infrastructure resources to deliver higher computing and I/O performance and lower network latency.

Altran’s ENSCONCE platform can reside in micro data centers close to the access network, at aggregation points, in regional data centers and in central offices, lowering the barrier for application developers to host their edge applications. The platform offers several features for developers, including low-latency edge application development through software development kits (SDKs). The SDK provisions edge applications on demand, discovers edge deployments, orchestrates applications across operator networks, and monitors and manages applications throughout their lifecycle. Incorporating OpenNESS microservices into the ENSCONCE platform will support accelerated virtual switching.

OpenNESS is an open-source MEC software toolkit that enables edge platforms to onboard and manage applications and network functions with cloud-like agility across any type of network. Based on a microservices architecture, it provides highly optimized building blocks that can be composed into new edge platform applications and services. With OpenNESS, ENSCONCE is able to deliver uniform interfaces for 4G and 5G mobile networks and can take full advantage of Intel Xeon processors and accelerators to boost computing and I/O performance and cut network latency.
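The announcement names the SDK's lifecycle steps — discover edge deployments, orchestrate applications across operator networks, and monitor them through their lifecycle — without publishing its interface. As an illustration only, a client for that kind of lifecycle might be sketched as follows; every class and method name here is hypothetical, not Altran's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class EdgeDeployment:
    """A single edge site (micro data center, regional DC, etc.)."""
    name: str
    region: str
    apps: list = field(default_factory=list)


class EdgeSDKClient:
    """Hypothetical client mirroring the lifecycle steps named in the
    release: discover deployments, orchestrate an app onto a site, and
    monitor the apps hosted there."""

    def __init__(self):
        # In a real platform this inventory would come from the operator
        # network; here it is a static stand-in.
        self._sites = [
            EdgeDeployment("micro-dc-1", "access"),
            EdgeDeployment("regional-dc-1", "regional"),
        ]

    def discover(self, region=None):
        """Discover edge deployments, optionally filtered by region."""
        return [s for s in self._sites if region is None or s.region == region]

    def deploy(self, app_name, site):
        """Orchestrate an application onto a chosen edge site."""
        site.apps.append({"name": app_name, "state": "RUNNING"})
        return site.apps[-1]

    def status(self, site):
        """Monitor the applications hosted at a site."""
        return {a["name"]: a["state"] for a in site.apps}


client = EdgeSDKClient()
site = client.discover(region="access")[0]
client.deploy("video-analytics", site)
print(client.status(site))  # {'video-analytics': 'RUNNING'}
```

The point of the sketch is the shape of the workflow, not the names: an edge SDK of this kind lets a developer target a site near the access network without knowing the operator's topology in advance.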
With OpenVINO, applications on the ENSCONCE platform can perform low-latency inference on a variety of AI accelerators, including Intel Xeon Scalable processors, Intel Movidius Vision Processing Units (VPUs), Intel Field Programmable Gate Arrays (FPGAs) and neural accelerators. OpenVINO also provides a library of optimized inference models for computer vision and other domains.
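In OpenVINO, each accelerator class is addressed by a device plugin name — for example "CPU" for Xeon processors, "MYRIAD" or "HDDL" for Movidius VPUs, and a "HETERO:FPGA,CPU" combination for FPGA offload with CPU fallback in the OpenVINO releases of this era. A minimal sketch of how an edge application might pick among whatever devices a given ENSCONCE node exposes (the preference list and helper are assumptions, not part of OpenVINO itself):

```python
# Preference order: dedicated neural accelerators first, then FPGA
# offload, with the Xeon CPU as the universal fallback. These strings
# are OpenVINO device plugin names; the ordering is an assumption.
PREFERRED_DEVICES = ["HDDL", "MYRIAD", "HETERO:FPGA,CPU", "CPU"]


def pick_device(available):
    """Return the most-preferred OpenVINO device string that this node
    actually exposes; fall back to the CPU plugin."""
    for device in PREFERRED_DEVICES:
        if device in available:
            return device
    return "CPU"


# On a node with only a Xeon CPU:
print(pick_device(["CPU"]))            # CPU
# On a node that also exposes a Movidius VPU:
print(pick_device(["CPU", "MYRIAD"]))  # MYRIAD
```

In a real OpenVINO application the list of exposed devices would come from the Inference Engine itself (the `IECore.available_devices` property in the Python API of that period), and the chosen string would be passed when loading the network onto a device.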