Wednesday, November 20, 2013

Maven

Installing local dependencies in Maven

Maven is a build automation tool used primarily for Java projects. It uses a Project Object Model (POM) –an XML file– to describe the project being built, its dependencies on other modules, and external components. The benefit of letting Maven manage external dependencies is that you do not have to worry about downloading all the required libraries and versions yourself.


Sometimes there are dependencies that are not available in any public Maven repository, or maybe you want to create a dependency from one of your own projects. This is a recommended practice, since dividing a project into smaller modules makes the architecture and the code much cleaner and easier to maintain.

Maven provides an easy way to achieve this: the install plugin, which installs an artifact into the local repository. The command for installing the artifact is detailed below:

mvn install:install-file -Dfile=c:\path-to-dependency.jar -DgroupId=your.group.id -DartifactId=yourArtifactIdentifier -Dversion=version -Dpackaging=jar


mvn install:install-file tells Maven to install a dependency from a file into the local repository.

-Dfile specifies the path to the JAR file that will be installed into the local repository.

-DgroupId identifies the group your project belongs to. A good practice is to follow the package name rules: it should start with a domain name you control, and then you can create subgroups, for example: maven.i2cat.net, maven.i2cat.net.projectA, maven.i2cat.net.projectA.business, etc.

-DartifactId is the name of the JAR without the version.

-Dversion is the version of the dependency (for example 1.0.0).

The command prompt will show some information about the installation process. Once this is done, you can add the dependency to your project:

<dependency>
     <groupId>your.group.id</groupId>
     <artifactId>yourArtifactIdentifier</artifactId>
     <version>1.0.0</version>
</dependency>


To sum up, Maven lets us add external dependencies to our projects. In addition, it provides a plugin for installing new dependencies into our local repository, which supports the good practice of dividing a project into multiple smaller modules, making the architecture and the code much cleaner and easier to maintain.

Monday, October 28, 2013

Flex

Apache Flex
Today Apache released the Flex 4.11 SDK, keeping alive the open source framework for developing Rich Internet Applications that can run in a browser, on the desktop or on a mobile device using the same programming model, tools and codebase.

Flex was created by Macromedia in 2004 and was based on their proprietary Flash platform. Macromedia was soon acquired by Adobe Systems, which released Flex 2. It started to attract developers thanks to the new ActionScript 3 language, the release of Flash Player 9, and the Flash Builder IDE, based on the Eclipse platform and very familiar to Java developers.

From then on, Flash Player was no longer used only to create animations and video games. Flex turned out to be a good framework for creating dynamic web applications. Using an object-oriented programming language, a set of user interface components and the new SDK, developers could build Rich Internet Applications running on Flash Player.

In 2007 Adobe included support for AIR (a desktop application runtime), so the same web applications could be built as desktop applications running on Windows or Mac OS.

Over the last five years, despite the many enhancements the Adobe team added to the Flex and AIR frameworks, rumors kept appearing about the imminent death of the Flash Platform and all the technologies around it.

Good news for Flex developers: Adobe donated Flex to Apache, and it is now not only supported by a growing community but has also been a top-level project of the Apache Software Foundation for almost a year. The latest version, Flex 4.11, released on October 28th, includes 15 new user interface components, support for the latest mobile screens, many other improvements and bug fixes.

Friday, October 18, 2013

SPA with AngularJS

Introduction to SPA with AngularJS

We are going to introduce two concepts that have changed our perception of frontend web development: SPA and AngularJS. The former stands for Single-Page Application and the latter is a Javascript framework that helps you make your app meet the requirements of an SPA.

The term SPA is self-explanatory: an SPA is a web application that holds all of its functionality in a single web page. This changes how we conceive development of a typical web application; instead of making a request to the server every time a user changes section (e.g. clicks on a link), the front-end client handles all those changes without navigating to another page or reloading the current one.



SPA Architecture

SPAs are rich, responsive applications implemented with Javascript and HTML5, usually using REST for communication with the server. That means the backend is only hit when the user actually needs data to populate the views (forget serving every dynamic HTML, Javascript and CSS file on each navigation), which implies higher throughput per server and better scalability.
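To make that concrete, here is a minimal sketch of populating a view over REST in plain Javascript; the /api/users endpoint and the userList element are assumptions for illustration, not part of any real project:

// Minimal sketch: fetch JSON from a (hypothetical) REST endpoint and render it
// into the current page, with no navigation and no page reload.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/users');                           // assumed endpoint returning a JSON array
xhr.onload = function () {
    var users = JSON.parse(xhr.responseText);
    var list = document.getElementById('userList');      // assumed <ul id="userList"> in the page
    users.forEach(function (user) {
        var item = document.createElement('li');
        item.textContent = user.name + ' ' + user.surname;
        list.appendChild(item);
    });
};
xhr.send();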

Now it is Angular's turn. AngularJS is a powerful Javascript framework built to ease the development of SPAs. Angular's main pillars are testability and dependency injection, both of which are very familiar to every Spring backend developer (though they should be to all of us).


It is, in broad terms, an MVC framework with which a developer can design complete, reusable components. Its main feature is two-way data binding (a strategy commonly used to build Flex applications).

Suppose that you create an application with two components, A and B, and that both components share the data of one model, for instance a User with a name, surname and age. Two-way data binding means that every time one of the components modifies an attribute of that model, the change is instantly reflected in the other component without any extra Javascript listeners or DOM-related code; Angular takes care of everything for you.
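As a rough sketch of that idea (the module, controller and field names here are ours, not from any real project), two parts of the page can bind to the same user model; editing the inputs instantly updates the paragraph below, with no hand-written listeners:

<!-- Sketch of two-way binding in AngularJS 1.x; all names are illustrative -->
<div ng-app="demoApp" ng-controller="UserController">
    <!-- Component A: an editable form bound to the model -->
    <input type="text" ng-model="user.name">
    <input type="number" ng-model="user.age">

    <!-- Component B: a read-only view of the same model, updated instantly -->
    <p>{{user.name}} is {{user.age}} years old</p>
</div>

<script>
    angular.module('demoApp', [])
        .controller('UserController', function ($scope) {
            // The shared model: any change made through the inputs above
            // is reflected in the paragraph below without extra code.
            $scope.user = { name: 'John', surname: 'Doe', age: 30 };
        });
</script>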

Right now we are developing an SPA with Angular and are already seeing huge benefits from using it; we will keep posting about our progress here.

Friday, July 5, 2013

Java RTMP Client

Back to the Desktop

Let’s face it, sometimes web developers can lose track of how difficult some tasks are or, in other words, how much trouble browsers save us.

As an example, today I bring one of our latest requirements:


We had to develop two applications: a mobile app to publish video and audio in real time, and a Java video player to play that stream.

At first sight it may not seem too difficult; we have strong experience with real-time multimedia applications but, as always, the constraints made it much harder: it was mandatory to use the RTMP protocol, because stream playback had to be real time.

The Android application used to send the video was already done (at least its core technology), so in order to use it, it was necessary to implement the RTMP protocol in Java to receive the published streams and, once we had that, decode them (H.264 video and Speex audio) for playback.

So, conceptually, the problem for the Java player was reduced to these modules:


To achieve that, we went through two different approaches. The first one was to use RTMPDUMP and VLCJ:

RTMPDUMP is a console application that implements the RTMP protocol and is able to subscribe to a stream and write it to a file or pipe it to another process.

VLCJ is a powerful library that creates an instance of the VLC player, which would play the stream recorded by RTMPDUMP.

Although the solution worked, there were many drawbacks, such as latency, that made it unviable: VLCJ had no support for InputStreams (so no piping) and, on top of that, there was a buffering problem: if the VLC video buffer reached the end of the file (because it filled up faster than RTMPDUMP could write to it), the video would only play up to that point.

Second solution:

Xuggler

When I found that library and saw it had built-in RTMP support, I knew there was a good chance it would solve all our problems, and so it did.

Xuggler takes care of everything, RTMP communication and decoding, making it quite easy and fast (less than 250 lines) to have a stream decoded and played.

Finally, a full scheme with all the players involved in the application and their functions:


As I said at the start of this post, I had never realized the amount of work hidden under an HTML <video> tag; it is able to open a stream, find the correct codec, determine its size, scale it to the selected dimensions and play it for us. That’s great!

That’s all for now.

Friday, June 14, 2013

Microsoft Kinect

A new way of interaction

Microsoft Kinect is a motion sensing device developed by Microsoft. Initially it was conceived to improve the playing experience with Microsoft’s video game console, the Xbox 360. Since then it has become a source of opportunities to create new ways of interacting with all kinds of platforms. Its uses range from industrial applications to health support in hospitals. The Kinect sensor is being introduced in healthcare environments, and nowadays it can be seen in operating rooms, supporting doctors during surgery, or in patients’ homes, helping them with the rehabilitation process.

In the following image we can see the parts that form the device:


To obtain the 3D depth image, one of the 3D depth sensors projects a large number of infrared rays with a specific pattern. The other 3D depth sensor reads the points where the infrared rays hit the obstacles in the scene and calculates the distance from the sensor to each obstacle and its rotation with respect to the sensor. Once this calculation has been done for each infrared ray, the sensor is able to provide a new 3D depth map that can be used by a video game console or a computer. The device also includes an RGB camera that provides RGB images like a regular webcam, and a microphone array lately used to implement voice recognition.

To see the 3D depth sensors in action: the projected infrared rays appear as a multitude of dots all over the scene if you use a night-vision camera, as we can appreciate in the following video:

Thursday, May 23, 2013

Near Field Communication


NFC, new ways of communication

NFC is an emerging technology that allows electronic devices to establish communication with each other without any kind of wired connection, simply by bringing the two devices together until they touch. This method of communication is known as "tap and go" or "touch and go", so named because of the need to bring the two devices to the point where they touch in order to initiate communication. The maximum distance at which a connection can be established between the two devices is about 10 centimetres.

There are two different modes of communication: the active and the passive mode. The main difference is that in active mode the device creates its own RF field to send the information, whereas in the passive mode the device doesn’t have to create a new RF field because it uses the one created by the other device.




NFC operates at very short range; the optimal distance between two devices to start a connection is 2 to 4 centimetres. The communication involves two actors: the active device generates an RF field that can be used by the other device. This allows NFC receivers to be as simple as stickers or tags, because they do not require any battery or power source in order to work.

These small stickers are normally used to store information to be read, such as personal data, bank account information, PINs, etc. Nevertheless, the information they contain can be rewritten. Their memory ranges from 96 bytes to 4 kilobytes.


There are three operating modes:

· Reader / Writer

This mode allows the mobile device to read the data stored on NFC or RFID tags. It is normally used to add extra information to posters, to identify products, or to store web addresses related to the object the tag is attached to. This mode can also write data to some tags.

· Card Emulation

This mode allows mobile devices to make bank transactions in the same way as credit or debit cards. It is therefore used for identification, payment or access-control applications.

· Peer-to-Peer

This mode allows mobile devices to interact with each other. Each phone must be equipped with NFC technology, and communication starts when both terminals come within a very short distance of each other. All kinds of information can be shared between them: business cards, documents, photos or other personal data.



Tuesday, April 23, 2013

PhoneGap


Program once, deploy to multiple platforms

One problem that comes with smartphone application development is deciding which platforms to develop for. As we can see in the graph, the most widespread mobile operating system nowadays is Android with 69% market share, followed by Apple iOS with 19%, BlackBerry with 4% and Windows Phone with 3%. If you can’t afford to develop a native app for every platform, you have to choose the most important ones and, inevitably, lose users.

Mobile OS market share 2012. Source: IDC

PhoneGap is a framework that allows us to program using HTML5, CSS and Javascript and then deploy these web applications to any mobile platform as if they were native applications. Therefore, there is no need to program natively for every platform, saving time and money.




HTML, CSS and Javascript are widespread languages that have been used for many years in web development. Now we can take advantage of the knowledge and experience of those developers to create mobile apps.

PhoneGap makes it possible to access many device features from a web application. You can access the device’s accelerometer, camera, files, notifications, etc. using Javascript (you can find all the information about the PhoneGap API here: http://docs.phonegap.com/en/2.6.0/index.html). However, it has some limitations compared with a native application. The PhoneGap API exposes some device features, but it can’t take full advantage of everything the device offers. Moreover, you depend on the device’s native browser engine to run the app, which can be troubling if you want to use some HTML5 features. For example, WebSockets are not yet supported by all mobile browsers (if you want to know which browsers support a given HTML feature, visit http://caniuse.com).
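As a hedged sketch of what that looks like in practice (based on the PhoneGap 2.x documentation linked above; the element id and log messages are ours), reading the accelerometer and taking a photo are plain Javascript calls once the deviceready event has fired:

// Sketch of PhoneGap 2.x device APIs from Javascript; run only after 'deviceready'.
document.addEventListener('deviceready', function () {

    // Read the current accelerometer values once
    navigator.accelerometer.getCurrentAcceleration(
        function (acceleration) {
            console.log('x=' + acceleration.x + ' y=' + acceleration.y + ' z=' + acceleration.z);
        },
        function () { console.log('Could not read the accelerometer'); });

    // Take a picture with the device camera and show it in an <img> element (id is illustrative)
    navigator.camera.getPicture(
        function (imageURI) {
            document.getElementById('photo').src = imageURI;
        },
        function (message) { console.log('Camera failed: ' + message); },
        { quality: 50, destinationType: Camera.DestinationType.FILE_URI });
}, false);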

To sum up, PhoneGap gives us the flexibility to develop for a wide range of mobile operating systems using only web languages. However, it can’t take advantage of all device features, which makes the framework unsuitable for some kinds of applications.

Monday, March 25, 2013

Web Real-Time Communications

WebRTC, a huge leap in multimedia web-based solutions

Over the years we have been involved in the development of several video-conference platforms with different requirements, such as multi-party conferencing or real-time file sharing, aimed at enhancing either professional-to-professional or patient-to-professional communication in our medical environment.


Back when those projects started, the only reliable platform that met those requirements was Adobe Flash ©, which, by the way, has proved to be fully functional in all our scenarios; that is, not just ideal test connections but real ones such as DSL, or even lossy scenarios like WiFi or 3G, and all of this in hospital environments where security is not an option but a requirement (firewalls, proxies, VPNs, etc.).

 

However, the post-PC era has arrived and has brought some significant market changes that we need to adjust to.


For example, there is one constraint shared by all smartphone and tablet browsers: they no longer support Adobe Flash ©, so it is mandatory to find another solution capable of delivering the same results we already have in terms of reliability and quality.


There seem to be several possible solutions out there but, from our point of view, the most promising one is WebRTC, conceived from scratch as an end-to-end browser solution that addresses many of these problems through a Javascript API.


According to its information webpage, https://sites.google.com/site/webrtc/:
“WebRTC is a free, open project that enables web browsers with Real-Time Communications (RTC) capabilities via simple Javascript APIs. The WebRTC components have been optimized to best serve this purpose.”
“The main aim is to enable rich, high quality RTC applications to be developed in the browser via simple Javascript APIs and HTML5.”

It brings out of the box:

  1. P2P communication between browsers
  2. NAT traversal (ICE and STUN)
  3. Adaptive QoS based on bandwidth allocation
  4. Automatic Multimedia Discovery
  5. *Audio and Video Recording
  6. *P2P File Sharing

That being said, how easy is it to develop an application that makes use of this technology?

// Ask the browser for camera and microphone access (Chrome's prefixed API at the time of writing)
navigator.webkitGetUserMedia({ video: true, audio: true },
  function (localMediaStream) {
    // Attach the local stream to the <video> element and start playback once metadata is loaded
    var video = document.querySelector('video');
    video.src = window.webkitURL.createObjectURL(localMediaStream);
    video.addEventListener('loadedmetadata', function () { video.play(); });
  },
  function (error) { console.log('getUserMedia error: ' + error); });

That simple piece of code:

  1. Detects the available hardware resources on the computer according to our requirements (in this example, audio and video)
  2. Lets the user pick the right stream source (in case you have two cameras, for example)
  3. Opens and creates the media stream(s)
  4. Starts playing it in the video element of our HTML page

And of course, that code will work across all API-compliant browsers (currently Chrome, Firefox and Opera; Internet Explorer also plans to implement it).
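The snippet above only covers local capture; the browser-to-browser leg from the feature list relies on RTCPeerConnection. Below is a rough, hedged sketch of one side of a call using Chrome’s prefixed API of the time; the signaling channel (how offers and ICE candidates reach the other browser) is left as an assumption, represented here by a hypothetical sendToOtherPeer() helper:

// Sketch of the peer-to-peer side; sendToOtherPeer() stands for whatever
// signaling channel you provide yourself (a WebSocket, for example).
var pc = new webkitRTCPeerConnection({
    iceServers: [{ url: 'stun:stun.l.google.com:19302' }]
});

pc.addStream(localMediaStream);                 // the stream obtained from getUserMedia above

pc.onicecandidate = function (event) {
    if (event.candidate) {
        sendToOtherPeer({ candidate: event.candidate });   // assumed signaling helper
    }
};

pc.onaddstream = function (event) {
    // Play the remote peer's stream in a second <video> element (id is illustrative)
    var remoteVideo = document.querySelector('#remoteVideo');
    remoteVideo.src = window.webkitURL.createObjectURL(event.stream);
};

pc.createOffer(
    function (offer) {
        pc.setLocalDescription(offer);
        sendToOtherPeer({ sdp: offer });        // the other peer answers through the same channel
    },
    function (error) { console.log('createOffer error: ' + error); });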

At this stage we don’t know whether it will be as good as it seems, but it is definitely worth a research effort to discover whether it is a compelling alternative to our stable Adobe Flash © platforms.

*Currently drafts for both APIs have been released.