The RuM laboratory researches and develops technologies that allow computerized systems to perceive and interpret the world, make decisions, and act. We believe that robotics, as well as artificial perception and intelligence, will play an increasingly important role in the future of humanity – in daily life, in industry and the economy, and in politics. The goal of EDI RuM is to become a major player in shaping this future – a player whose research results and technologies are not only recognized by the scientific community but also contribute to the well-being, safety, and health of mankind.
Keywords:
- IoT, wireless sensor networks, smart sensors
- signal and image processing
- AI, xAI, computer vision, machine learning, deep neural networks
- embedded intelligence, accelerators, edge and fog computing, FPGA, SoC
- automation, industrial robotics, real-time control, mobile robots
Part of our perception research covers different kinds of smart sensors, sensor systems, and wireless sensor networks for industrial automation, environment monitoring, and integrity control of complex systems.
We also develop and adapt computer vision algorithms for classification, object detection, and image segmentation tasks. We have developed such algorithms for autonomous vehicles, biomedicine, intelligent transportation systems (vehicle and pedestrian detection and classification, number plate recognition), agriculture (detection of crops and weeds), and video analysis. However, the possible use cases for our AI-enabled computer vision are much broader.
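To give a flavour of how such detection work is evaluated, predicted bounding boxes are commonly scored against ground truth with the intersection-over-union (IoU) metric. The sketch below is illustrative only – the box format and the 0.5 acceptance threshold are standard conventions, not taken from any EDI codebase:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted vehicle box is typically accepted when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333...
```

Half-overlapping boxes of equal size score 1/3, which is why a 0.5 threshold already demands a fairly tight fit.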
Currently, deep neural network-based methods demonstrate the best performance in many machine perception tasks. The RuM laboratory therefore explores and applies different network models, including CNNs, RNNs, and GANs, which we train and test at EDI's high-performance computing center. Since the use of deep learning in many practical applications is hindered by the need for large amounts of annotated training data, RuM also researches methods for generating synthetic training data and for using simulated environments to train AI models. Our interest in medical image analysis has also led us to explore explainable AI and bio-inspired learning models.
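The appeal of synthetic training data can be sketched in a few lines: when images are composed programmatically from known parameters, the annotations come for free. The toy generator below (simple shapes, sizes, and noise levels are illustrative placeholders for a real simulator or renderer) produces a labelled sample with no manual annotation effort:

```python
import numpy as np

def synth_sample(rng, size=32):
    """Generate one synthetic image containing a square or a disc.
    Because the scene is rendered from known parameters, the annotation
    (class id and centre position) is exact and costs no manual labelling."""
    img = np.zeros((size, size), dtype=np.float32)
    cls = int(rng.integers(2))                  # 0 = square, 1 = disc
    cx, cy = rng.integers(8, size - 8, size=2)  # random centre, away from edges
    if cls == 0:
        img[cy - 4:cy + 4, cx - 4:cx + 4] = 1.0
    else:
        yy, xx = np.mgrid[:size, :size]
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 16] = 1.0
    img += rng.normal(0, 0.05, img.shape)       # sensor-like noise
    return img, {"class": cls, "centre": (int(cx), int(cy))}

rng = np.random.default_rng(0)
images, labels = zip(*(synth_sample(rng) for _ in range(100)))
```

A real pipeline would swap the shape renderer for a physics-based simulator or game engine, but the principle – perfect labels derived from the generating parameters – is the same.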
Embedded intelligence is another significant research area of the RuM lab. We excel at utilizing a variety of design abstractions to push performance limits at the edge. We combine our expertise to develop industrial-grade sensing nodes with very low latency (<500 μs) and advanced control algorithms for autonomous driving. To achieve high-performance, low-power solutions, we design accelerators using specialized chips – field-programmable gate arrays (FPGAs). Our accelerators range from low-level image preprocessing and feature extraction to DNN inference and the acceleration of video-based localization for constrained systems. We design Linux-based autonomous flight platforms using modern SoCs (Systems-on-Chip) for future drones, which will be capable of localization without GPS or ground stations.
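One building block behind FPGA-friendly neural network inference is replacing floating-point arithmetic with fixed-point (integer) arithmetic, which maps directly onto DSP blocks. The sketch below quantizes a small fully connected layer to int8 and compares the result with the float reference; the scales, sizes, and symmetric quantization scheme are illustrative assumptions, not details of the published FPGA design:

```python
import numpy as np

def quantize(x, scale):
    """Symmetric int8 quantization: real value ≈ q * scale."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(1)
w = rng.normal(0, 0.5, (4, 8)).astype(np.float32)   # layer weights
x = rng.normal(0, 0.5, 8).astype(np.float32)        # input activations

w_s, x_s = np.abs(w).max() / 127, np.abs(x).max() / 127
# Integer multiply-accumulate, as an FPGA DSP block would perform it;
# the accumulator is rescaled back to the real domain at the end.
acc = quantize(w, w_s).astype(np.int32) @ quantize(x, x_s).astype(np.int32)
y_int8 = acc * (w_s * x_s)

y_float = w @ x
print(np.max(np.abs(y_int8 - y_float)))  # maximum absolute quantization error
```

Keeping the hot loop entirely in integers is what lets such a layer be pipelined at high throughput in programmable logic; the float rescale happens once per output.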
The robotics division of our laboratory combines the perception and embedded intelligence capabilities described above with the control of robots. This allows us to work with technologies such as autonomous platforms (driving, flying) and smart production robots, which bring the manufacturing process into the modern digital age (Industry 4.0). To further automate production, next-generation industrial robots have to cooperate safely with humans and adaptively manipulate different objects under modern, dynamic production conditions. Robots have to become aware of the surrounding environment and use this awareness to adapt and optimize their future actions.
Recent projects
- Silicon IP Design House (SilHouse) part 2
- Artificial intelligence for more precise diagnostics (AI4DIAG) #ESIF
Publications
- Novickis, R., Levinskis, A., Kadiķis, R., Feščenko, V., Ozols, K. (2020). Functional architecture for autonomous driving and its implementation. 17th Biennial Baltic Electronics Conference (BEC2020), Tallinn, Estonia.
- Justs, D., Novickis, R., Ozols, K., Greitāns, M. (2020). Bird's-eye view image acquisition from simulated scenes using geometric inverse perspective mapping. 17th Biennial Baltic Electronics Conference (BEC2020), Tallinn, Estonia.
- Novickis, R., Justs, D. J., Ozols, K., Greitāns, M. (2020). An Approach of Feed-Forward Neural Network Throughput-Optimized Implementation in FPGA. Electronics, Special Issue: Advanced AI Hardware Designs Based on FPGAs.
- Druml, N., Debaillie, B., Anghel, A., Ristea, N. C. (2020). Programmable Systems for Intelligence in Automobiles (PRYSTINE): Technical Progress after Year 2, 2020 23rd Euromicro Conference on Digital System Design (DSD). doi: 10.1109/DSD51259.2020.00065
- Sudars, K., Jasko, J., Namatevs I., Ozola L., Badaukis, N. (2020). Dataset of annotated food crops and weed images for robotic computer vision control, Data in Brief, 31. doi:10.1016/j.dib.2020.105833
Recent patents
Head of laboratory
Laboratory staff
Dr. sc. ing. Kaspars Ozols
Deputy director of development, Senior Researcher
+371 67558161