About Me (Nacir Bouali)
Welcome to my personal space at the University of Eastern Finland Wiki. My name is Nacir Bouali, and I am a PhD candidate in the Doctoral Programme in Science, Technology and Computing at the School of Computing at UEF. My research focuses on applications of Natural Language Processing (NLP) in education. I hold a Master's degree in Software Engineering and a Bachelor's degree in Software Design. I have previously worked in teaching, as both a lecturer and a lab instructor; in research, as a research assistant; and as a web developer. I am now working on educational games for both programming education and language learning.
The main idea behind my PhD project is using NLP to enable automatic conversion of text into VR or 3D animations. The video in the right column illustrates this idea with work from my Master's thesis (in 3D rather than VR). We also explore dialogue-based interaction between children and characters in VR. Results of my research, including research papers, demos, and prototypes, will be published here, so please visit this page from time to time for updates.
1st Update - Creating and customizing virtual environments
One of the milestones in converting natural language stories into VR is allowing users to describe the environments where their stories take place. In this version of the system, users can customize their scenes either with text or with the mouse. In the video (right column), a user describes a meadow and is then able to move both trees and houses with the mouse. We are currently enlarging the database of props and environments to better suit the needs of our target users.
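To make the two interaction modes concrete, here is a minimal, hypothetical sketch of such a scene model (the class and prop names are my assumptions, not the system's actual code): text adds props to the scene, and a mouse drag updates a prop's position.

```python
from dataclasses import dataclass, field

@dataclass
class Prop:
    """A placeable object in the scene (tree, house, ...)."""
    kind: str
    x: float = 0.0
    y: float = 0.0

@dataclass
class Scene:
    """Hypothetical scene model supporting both text and mouse customization."""
    environment: str = "empty"
    props: list = field(default_factory=list)

    def describe(self, text: str) -> None:
        # Naive keyword spotting, standing in for the real NLP pipeline.
        words = text.lower().split()
        for env in ("meadow", "forest", "beach"):
            if env in words:
                self.environment = env
        for kind in ("tree", "trees", "house", "houses"):
            if kind in words:
                self.props.append(Prop(kind.rstrip("s")))

    def move(self, index: int, x: float, y: float) -> None:
        # Mouse interaction: drag a prop to a new location.
        self.props[index].x, self.props[index].y = x, y

scene = Scene()
scene.describe("A meadow with a tree and a house")
scene.move(0, 3.5, 1.0)   # drag the tree somewhere else
print(scene.environment)  # meadow
print(len(scene.props))   # 2
```

The real system of course does far richer parsing and 3D placement; the point of the sketch is only that the same scene state can be edited through either channel.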
2nd Update - Proposing a new system architecture
One of the key issues with the previous text-to-animation system was performance: because the NLP and graphics modules ran on the same machine, the system was slow. We therefore separated the two modules and ran them on different machines. The new architecture has an NLP module running on a server (built with NLTK instead of the OpenNLP used in the previous version) and a graphics module running as the client, with the two communicating over websockets. Work is now focused on developing a visual semantic parser to improve the results of the NLP module.
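As a rough illustration of the server side of this architecture, the sketch below shows a message handler of the kind the NLP server could run: the graphics client sends a JSON message over the websocket, and the server replies with a JSON animation command. The message format and function names here are my assumptions, and a regex stands in for the actual NLTK pipeline.

```python
import json
import re

def handle_message(raw: str) -> str:
    """Turn a JSON text message from the graphics client into a JSON
    animation command. A crude regex stands in for the NLTK pipeline."""
    request = json.loads(raw)
    sentence = request["text"]
    # Extract a subject and a verb from a simple "The X Y" sentence.
    match = re.match(r"^(?:the\s+)?(\w+)\s+(\w+)", sentence.strip(), re.IGNORECASE)
    if not match:
        return json.dumps({"error": "could not parse"})
    actor, action = match.group(1).lower(), match.group(2).lower()
    # The graphics client receives this over the websocket and animates it.
    return json.dumps({"actor": actor, "action": action})

reply = handle_message(json.dumps({"text": "The rabbit ran"}))
print(reply)  # {"actor": "rabbit", "action": "ran"}
```

Keeping the wire format to small JSON messages like this is what lets the heavy NLP work stay on the server while the client only renders.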
3rd Update - Possible Spinoffs
The architecture we have proposed allows us to create a number of graphics engines and connect them all to the same Natural Language Understanding (NLU) module. Building on this idea, we developed a small graphics client that children can use on their parents' phones with a low-cost VR headset and a Bluetooth controller. It lets them rearrange words on little clouds to form simple sentences, which are then turned into animations. We are also working on letting a super-user (an instructor) select which words appear in the game, and consequently which animations can result. This can be very useful: an instructor can change the words each week, and the children can practice those words in VR stories. The game works great so far; the demo is included (4th video in the list). If you want to give it a try, let me know! I can send you a copy, and I would very much like to hear your opinions.
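The instructor-selected word idea can be sketched very simply. In this hypothetical example (the word set, pattern, and function names are illustrative assumptions), the game accepts a child's word arrangement only if it forms a valid sentence built from the instructor's weekly words.

```python
# Words chosen by the super-user (instructor) for this week.
INSTRUCTOR_WORDS = {
    "subjects": {"rabbit", "dog", "girl"},
    "verbs": {"runs", "jumps", "sleeps"},
}

def is_valid_sentence(words: list) -> bool:
    """Accept arrangements of the form: 'the' + subject + verb."""
    if len(words) != 3:
        return False
    article, subject, verb = (w.lower() for w in words)
    return (article == "the"
            and subject in INSTRUCTOR_WORDS["subjects"]
            and verb in INSTRUCTOR_WORDS["verbs"])

print(is_valid_sentence(["The", "rabbit", "jumps"]))  # True
print(is_valid_sentence(["jumps", "The", "rabbit"]))  # False
```

Swapping in a new `INSTRUCTOR_WORDS` set each week changes both which sentences the children can form and which animations can result.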
2. Other Research Projects
Following the idea of teaching English with VR, we wanted to extend our work and teach the basics of programming with animations. In this small project, we want to give kids a system where they can write their stories in object-oriented statements. We do not aim to teach the basics of OOP ourselves; we believe instructors in class can introduce the basics, and the kids can then practice on our system. For instance, kids can write small stories in natural language and convert each sentence in the story into an object-oriented statement; here's an example below:
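As a hedged sketch of what such a mapping could look like (the class and method names are illustrative, not the system's actual statements), the sentence "The rabbit ran" becomes an object creation followed by a behaviour call:

```python
# Illustrative mapping from a story sentence to object-oriented statements.

class Rabbit:
    """A story character; its behaviours become methods."""
    def run(self):
        print("The rabbit ran")

# "The rabbit ran" maps to: create the character, then call the behaviour.
rabbit = Rabbit()   # "The rabbit ..."
rabbit.run()        # "... ran"
```

Note that calling `rabbit.run()` before `rabbit = Rabbit()` would fail, which is exactly the kind of logic the system can make tangible for children.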
This is a little complex for children, and we still have a lot of work to do to simplify the statements. But this approach has the potential to teach kids programming concepts, such as objects and behaviours. It can also teach them logic: for example, if a sentence says "The rabbit ran", they need to instantiate a rabbit before they can call the behaviour run(). See the demo on the right (5th video in the list). Get in touch if you want to learn more about this project, or (even better) if you want to collaborate to make it more suitable for children.
Brief video introducing a prototype of the project (presented at AIED 2018)
Customizing Scenes with Text (disabled in current version)
imikathen Demo- Handling Multiple Sentences
VR Game- Demo
Animate Stories with Code