CAMBRIDGE, England--(BUSINESS WIRE)--Blueshift Memory, innovator of a novel proprietary high-speed computer architecture, has announced that it has been selected to receive one of the prestigious Smart Grants awarded by Innovate UK during early 2022. The highly competitive £25 million Smart fund helps UK SMEs to swiftly commercialize the best game-changing ideas, which must be genuinely novel as well as disruptive within their sector.
Blueshift Memory’s 13-month project is entitled “Research on the application of a new generation memory architecture in computer vision AI solutions for IoT devices”. Its aim is to develop a next-generation computer vision (CV) application on edge devices for the Internet of Things (IoT), based around the company’s unique computer architecture (the Cambridge Architecture™).
CV uses artificial intelligence (AI) to enable the content of digital images to be analyzed and interpreted by a computer. This ability for computers to 'see' is crucial to solving a wide range of real-world problems in fields including robotics, Industry 4.0, Smart Cities and autonomous vehicles, but it currently demands substantial computing capacity and, consequently, high power consumption.
“By dramatically increasing memory access speed, the compact CV AI module we are developing will open up use cases such as onboard real-time scenario analysis in body-worn cameras,” said Peter Marosan, Founder and CEO of Blueshift Memory. “It will also help us demonstrate the potential benefits of the Cambridge Architecture IP for larger system-on-chip designs for applications like High Frequency Trading and In-Memory Databases.”
“It is a great achievement for Blueshift Memory to have won a Smart Grant for this work, as the selection procedure is highly competitive. Our project was one of only 71 successful applications out of a total of 1,072 in this funding round,” he added.
The Blueshift Memory approach is radically different from the Graphics Processing Units (GPUs) that are presently used for CV AI. The current project involves custom configuration of a Field Programmable Gate Array (FPGA) using deep learning, optimizing it for faster performance and better power efficiency. The company also plans to integrate this technology into an Application-Specific Integrated Circuit (ASIC), along with leading-edge RISC-V processor capability.