Development of camera-based navigation on Android OS

Modern mobile devices and ongoing technical progress make it possible to treat the mobile device as a development platform in its own right. The main feature of this platform is the ability to create applications that, by accessing the device's peripherals, solve classes of problems that cannot be solved on a stationary computer.

The existence of mobile operating systems and SDKs for them allows developing full-featured applications that match or even surpass their counterparts on workstations. Google Android is an open-source operating system based on a lightweight Linux kernel and able to execute Java applications. Application development can be performed on workstations running Windows, Linux, or MacOS, with the Eclipse IDE available for each of these operating systems.

The developer has access to a device emulator through the ADT plugin, most features of which are implemented for the Eclipse IDE, the de facto standard for Android development, although other IDEs exist as well. Android OS is based on the Linux kernel, which is directly responsible for system functions. Java applications are executed in the Dalvik virtual machine at the infrastructure level. Above the infrastructure level lies the application level, which hosts Google's own applications; these can also be used as building blocks of new applications.

One of the problems that can be solved only on a mobile device is camera-assisted navigation in urban surroundings. It is proposed to implement a program complex that analyzes the current street scene, recognizes traffic signs, and warns the user about danger in advance with audible or visual signals.
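As an illustration only, the following Java sketch shows how the warning part of such a complex could look on Android. The onSignDetected callback is a hypothetical hook fed by the recognition module; the visual signal uses a Toast and the audible one uses Android's TextToSpeech.

    import android.app.Activity;
    import android.os.Bundle;
    import android.speech.tts.TextToSpeech;
    import android.widget.Toast;

    import java.util.Locale;

    // Sketch of the warning side of the proposed complex: when the recognition
    // part reports a traffic sign, the user is notified both visually (Toast)
    // and audibly (TextToSpeech), which matters most for users with low vision.
    public class WarningActivity extends Activity implements TextToSpeech.OnInitListener {

        private TextToSpeech tts;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            tts = new TextToSpeech(this, this);
        }

        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        }

        // Hypothetical callback invoked by the recognition module for each detected sign.
        public void onSignDetected(String signName) {
            Toast.makeText(this, "Ahead: " + signName, Toast.LENGTH_SHORT).show();
            tts.speak("Attention, " + signName + " ahead", TextToSpeech.QUEUE_ADD, null);
        }

        @Override
        protected void onDestroy() {
            if (tts != null) {
                tts.shutdown();
            }
            super.onDestroy();
        }
    }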

Such a system would be most useful for people with vision impairments. At its core, the task reduces to a pattern recognition problem. It is proposed to use the OpenCV library for pattern recognition: it will be used to recognize traffic signs relevant to the pedestrian, to recognize traffic lights, and to analyze the traffic flow, in particular its speed. Recognition will be performed on each frame of the video stream captured from the mobile device's camera. An analysis of the current OpenCV for Android implementation, in particular the capture and processing of the camera video stream, shows that low-power, low-memory devices perform C++ JNI calls very slowly and that these calls are rather costly in terms of resource management. This leads to low application performance, which is acknowledged by the OpenCV developers themselves.
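To make that cost concrete, the sketch below shows the per-frame pattern described above: every camera preview frame crosses the Java/C++ boundary through a JNI call. The library name and the recognizeSigns native method are hypothetical stand-ins for an OpenCV-based recognizer.

    import android.hardware.Camera;

    // Illustration of per-frame processing: each preview frame is handed to
    // native code through a JNI call, which is exactly the overhead that hurts
    // low-power, low-memory devices.
    public class FrameProcessor implements Camera.PreviewCallback {

        static {
            // Hypothetical native library containing the OpenCV-based recognizer.
            System.loadLibrary("sign_recognizer");
        }

        // Hypothetical JNI entry point; returns the name of a recognized sign or null.
        private native String recognizeSigns(byte[] yuvFrame, int width, int height);

        private final int width;
        private final int height;

        public FrameProcessor(int width, int height) {
            this.width = width;
            this.height = height;
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // One JNI call per frame: cheap on a workstation, costly on a phone.
            String sign = recognizeSigns(data, width, height);
            if (sign != null) {
                // Hand the result to the warning part of the application.
            }
        }
    }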

It is proposed to use cloud computing, in the form of client-server messaging, to solve this problem. Cloud computing is essentially a further development of the client-server architecture that is already widespread on workstations and stationary computers. Cloud technologies give the user access to a so-called "thin" client, i.e., the GUI and the user-interaction logic, while the heavy computation runs on the server. One of the advantages of distributed computing is the more powerful server machine, which can provide more computational power and storage than a mobile device. It is therefore possible to let the server process the video stream and show the result on the mobile device. A common criticism of distributed computing is the possibility of compromising user data, because the implementation details and the retention period of the user's data are unknown. The current development step is research that measures the performance of such a distributed application and the benefit of using it.
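A minimal sketch of the thin-client idea follows, assuming a hypothetical HTTP endpoint that accepts a JPEG-compressed frame and returns the recognition result as plain text; on Android such a call would have to run off the UI thread.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    // Thin-client sketch: instead of running OpenCV locally, the device posts a
    // JPEG-compressed frame to a server and reads back the recognition result.
    // The endpoint URL is purely illustrative.
    public class RemoteRecognizer {

        private static final String ENDPOINT = "http://example.com/recognize"; // hypothetical

        public String recognize(byte[] jpegFrame) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "image/jpeg");
            try {
                OutputStream out = conn.getOutputStream();
                out.write(jpegFrame);
                out.close();

                InputStream in = conn.getInputStream();
                Scanner scanner = new Scanner(in).useDelimiter("\\A");
                return scanner.hasNext() ? scanner.next() : "";
            } finally {
                conn.disconnect();
            }
        }
    }

Compressing each frame to JPEG before upload trades some CPU time on the device for a much smaller payload, which matters on mobile networks.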

Thursday, December 1, 2011