Saturday, October 26, 2013

DevFest West 2013: Lightning Talk: Learnings, Prototypes & Use Cases on Google Glass



According to IMS Research, the wearables market is poised to grow from 14 million devices shipped in 2011 to as many as 171 million units shipped by 2016!  A recent Business Insider report adds that "those betting big on wearable computing believe an assorted new crop of gadgets — mostly worn on the wrist or as eyewear — will become a 'fifth screen,' after TVs, PCs, smartphones, and tablets."

I was recently invited to speak at DevFest West 2013, held at the Google campus in Mountain View, where I presented a lightning talk on "Learnings, Prototypes & Use Cases on Google Glass".
The talk shares insights and lessons learned from experiments in building innovative services for Google Glass, including capturing financial data from a picture and enabling mobile payments.  It also covers a number of Glass use cases as well as Glass prototypes that we implemented across Intuit.  If you find the presentation slides below useful, add a comment here or follow me @tasneemsayeed for future posts!



Wednesday, July 10, 2013

Learning on Accessibility for the iOS Platform

According to Apple's Accessibility Guide for iOS, you should make your iPhone application accessible to VoiceOver users because:
  • It increases your user base. You've worked hard to create a great application; don’t miss the opportunity to make it available to even more users.
  • It allows people to use your application without seeing the screen. Users with visual impairments can use your application with the help of VoiceOver.
  • It helps you address accessibility guidelines. Various governing bodies create guidelines for accessibility, and making your iPhone application accessible to VoiceOver users can help you meet them.
  • It's the right thing to do.
As of iOS 3.0, Apple has included the UI Accessibility programming interface, which is a lightweight API that helps an application provide all the information VoiceOver needs to describe the user interface and help visually impaired people use the application.
I recently gave a presentation on my learnings on Accessibility for the iOS Platform at an internal event at Intuit, which I wanted to share with all of you.  It provides an overview of what it means to make an app accessible on the iOS platform. It also provides guidelines for making your iOS app accessible, including an overview of the most common accessibility attributes and traits and how to add accessibility via Interface Builder as well as in code. It covers accessibility notifications, the VoiceOver-specific API, accessibility containers, and some of the best practices for accessibility.
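To make the code-level piece concrete, here is a minimal, hypothetical sketch. The outlets balanceChartView and refreshButton are placeholders of my own; the UIAccessibility properties, the notification call, and the VoiceOver check are the standard iOS APIs covered in the slides.

Example: Adding Accessibility in Code

#import <UIKit/UIKit.h>

// Hypothetical view controller snippet; the two outlets are placeholders.
- (void)viewDidLoad {
    [super viewDidLoad];

    // Expose a custom-drawn view to VoiceOver and describe what it shows.
    self.balanceChartView.isAccessibilityElement = YES;
    self.balanceChartView.accessibilityLabel = @"Account balance chart";
    self.balanceChartView.accessibilityValue = @"Balance trending upward";
    self.balanceChartView.accessibilityTraits = UIAccessibilityTraitImage;

    // Give an icon-only button a descriptive label and a hint.
    self.refreshButton.accessibilityLabel = @"Refresh";
    self.refreshButton.accessibilityHint = @"Reloads the latest data.";

    // Tell VoiceOver the layout changed after dynamic content is loaded.
    UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil);

    // Optionally adapt behavior while VoiceOver is running.
    if (UIAccessibilityIsVoiceOverRunning()) {
        // e.g. disable purely decorative animations
    }
}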
 
If you find the presentation helpful in making your iOS app accessible, feel free to send me a comment!  
Enjoy making your iOS app accessible!

Saturday, May 18, 2013

Google I/O 2013: An In-Depth Developer's Perspective


Compared to last year's Google I/O, which featured the spectacular augmented-reality Google Glass announcement via a breathtaking skydiving presentation, this year's Google I/O may have seemed lackluster, with no new hardware.

Nonetheless, Google I/O kicked off with a record 3.5-hour keynote with a heavy focus on software and services. The keynote covered service and feature upgrades for both Android and Chrome. If you were expecting a brand new Android phone or tablet announcement, then you may have been disappointed. However, a major takeaway was a unified user experience for the Chrome and Android platforms through shared services.

According to Google, Android activations stood at 100 million in 2011 and 400 million in 2012. Now, in 2013, Google put the number of Android activations at an incredible 900 million!  That's no small feat...

Huge applause came from the audience when Google announced the Android Studio IDE, based on IntelliJ!  Android Studio offers more options for Android development, making the process faster and more productive. A live layout feature was demonstrated that renders your app in real time while you are editing it and lets you preview a variety of device layouts (i.e. different form-factor phones and tablets). Refer to TechCrunch's excellent review, Google Launches Android Studio and New Features for Developer Console Including Beta Releases and Staged Rollouts, for further details.

There were several other significant announcements made at Google I/O including:

  • Google Maps gets a complete overhaul and is now delivered as part of Google Play Services. Three new location APIs were introduced:
    • The Fused Location Provider API utilizes all of the location-related radios and sensors in the phone, including WiFi, GPS, and the cell network, while significantly saving battery life. This new service greatly improves any application that uses location services.
    • The Geofencing API lets apps notify the user upon entering or exiting a configured virtual fence. The API allows each app to define up to 100 geofences simultaneously. Apps utilizing this service get better battery life and performance.
    • The Activity Recognition API uses the device's hardware sensors and machine learning to determine whether the user is walking, cycling, or driving. This allows apps utilizing the service to adjust their behavior depending on the user's mode of transport, and it is done in a very battery-efficient way, as no GPS is required.
  • Google+ Sign-In is a cross-platform single sign-in API. Fancy was used in the keynote demo: if you find a cool website that you like and sign in with Google+, you are automatically asked whether you would like to install its app. If you say yes, the app downloads and logs you in automatically on your current device and on other Android devices on your account - now that's very cool!
  • Google Cloud Messaging (GCM) for Android gets an overhaul. GCM is a service that allows you to send data from your server to your users' Android-powered devices, and also to receive messages from devices on the same connection. The new GCM features include:
    • Faster, easier GCM setup
    • Upstream messaging over XMPP. GCM's Cloud Connection Service (CCS) lets you communicate with Android devices over a persistent XMPP connection. The primary advantages of CCS are speed and the ability to receive upstream messages (i.e. messages from a device to the cloud).
    • A new API for synchronizing notifications. It maps a single user to a notification key, which you can then use to send a single message to multiple devices owned by that user.
  • Google Play Music All Access was launched in the US at I/O. It is a music subscription from Google that costs $9.99/month and comes with a 30-day free trial. It lets you explore millions of music tracks, which makes it look stronger for music discovery than competitors such as Spotify and Pandora. It is a simple on-demand-meets-radio service with various personalization features, and it works on phones, tablets, and web browsers. From a design and UX perspective, it makes it easy to switch between hands-on and hands-off experiences. If you are not sure what you want to listen to, you can just hit "Listen Now" and start listening to something right away. And when you want to search Google's huge on-demand catalog, you have that option as well.
  • Google Search. Google I/O 2013 marked the "end of search as we know it". Google announced that it is looking to change the way users find information by expanding its voice search capabilities and by leveraging services such as the Knowledge Graph and Google Now, under a credo the company has labeled "answer", "converse" and "anticipate". In other words, the company's core product will eventually respond better to naturally phrased questions. "Anticipate" means that Google Search will be able to guess what information users need most and provide it easily via the Google Now service. Google Now pulls information from across Google services to act as a personal assistant of sorts, offering information on users' commutes, appointments, and news from their favorite sources in one place. Conversational search was announced to be coming to all desktops and laptops via Chrome.
  • Chrome. Google announced that there are now more than 750 million active users of Chrome and that Chrome is increasingly used on mobile. Chrome's design goals are speed, simplicity, and security. Google's Sundar Pichai said, "The same capabilities that you're used to using for Chrome on a desktop are going to be coming to Chrome on Android." Thanks to WebGL and the Web Audio API, you will start seeing quite impressive web experiences, including games and rich interactive environments that were typically limited to the desktop.
    • Better web imaging. Google compared JPEG vs. WebP images. The quality was indistinguishable, but the WebP version was about two-thirds of the size (roughly a 31% reduction in file size). This will significantly improve load times on websites and help users stay within their data plan limits. Furthermore, WebP supports animated images as well.
    • Better video compression. Google compared H.264 vs. VP9 and noted that the quality of the VP9 video is the same as H.264, yet the VP9 file comes in at less than half the size (a 63% reduction in file size).
  • mCommerce. Google reported that when it comes to shopping on your phone, the percentage of people who reach the purchase screen and then abandon it is incredibly high, in the 90% range. Google made three key announcements around mCommerce:
    • A consumer launch of pay-by-Gmail, which is rolling out gradually, starting in the US
    • Two new APIs announced for developer launch:
      • Google Wallet Objects API. The vision is to digitize the entire wallet by allowing any kind of object to be inserted into Google Wallet.
      • Google Wallet Instant Buy for merchants selling physical goods. The goal is to let consumers complete purchases within two clicks, thereby improving the user experience.
  • Google+. Google introduced 41 new Google+ features at I/O!  First, Google+ gets a design overhaul: a multi-column layout whose width scales to the device the user is on, with animations such as flips and fades. It can also analyze an image, recognize what it contains, and hashtag it automatically.
  • Google Hangouts is now a standalone app for the desktop web, Android, and iOS, all announced and available starting at I/O. Conversations within a hangout are ongoing and don't end when you sign off. Also, real-time communication, that is, group video, is available at no charge.
  • Google Photos: Google announced that in addition to unlimited backup of all of your standard-sized photos, it will now give you 15 GB of storage for full-sized images. Google also announced machine-learning algorithms that create a highlight reel from your photos: the service checks the photos in an album for blurriness, smiles, and several other criteria (learned from hundreds of actual human photographers) and produces a highlight reel. Google also introduced an "auto-enhance" feature that instantly adjusts tonal distribution, red-eye, skin softening, noise reduction, and several other parameters to automatically enhance the picture. And finally, Google introduced a new "Auto Awesome" mode: if you take a burst of photos, it will make an animated GIF out of them. If you take a series of shots and some are dark, or someone is smiling in one but not another, it can make a composite image, similar to what the Galaxy S4 can do, without the user having to activate that setting. And that sounds cool!  It can also handle "Motion" shots, HDR, and panoramic photos.
And no developer review would be complete without a mention of the Developer Sandbox, which occupied two floors, with dedicated areas for Android, Wallet, Chrome, a standalone Chromebook Pixel display, Google+, Photos, and Google Play. And not to forget, Google Glass had an extra-large sandbox that was often crowded with spectators!  Google started shipping an early version of Google Glass, known as the Explorer edition, mostly to the couple of thousand developers who had requested one at last year's Google I/O conference and forked over a hefty $1,500!
I was fortunate to explore Google Glass, and the experience was truly amazing! Glass is not like wearing a laptop on your face, though it does have 12 GB of usable storage (16 GB total) synced with the Google cloud. Glass is more like augmented-reality spectacles, but without lenses. Its lightweight frame rests on the ears and nose, suspending a small prism at the upper right corner of the wearer's field of vision. Glass has a tiny touch pad built into one earpiece and a microphone to pick up voice commands. Interestingly, the earpiece uses "bone conduction" to deliver sound by vibration against the wearer's head. You can speak aloud to get information or tell Glass to "take a picture" or "take a video" (up to 10-second snippets). You can ask questions like "Google, find me a restaurant" and see the results in "knowledge cards". You can also send texts and make phone calls. If you swipe toward the back, you can see Google Now cards for flight, traffic, and other information.  In summary, Google I/O 2013 showcased several interesting innovations across Google products.

If you found this article useful or want to share your developer perspectives on Google I/O 2013, feel free to share your comments below, follow me by clicking on the upper left hand corner and/or follow me on Twitter @tasneemsayeed.

Tuesday, April 2, 2013

Best Practices for Mobile App Development on Android

Designing and building apps that look great and perform well across a wide range of devices, from smartphones to tablets, is crucial to ensuring an optimal user experience.
At the recent DevFest Silicon Valley event held at Google on March 15th, I presented a talk on Best Practices for Mobile App Development on Android.  The talk focuses on the golden rules and best practices of performance, including how to keep your apps responsive, how to effectively implement background services, and tips for improving the performance and scalability of long-running applications. It also touches briefly on best practices for user experience and concludes with the benefits of Intents and Intent Filters.  For further details, please refer to the slides attached below.


If you enjoyed this article, you may want to follow me (@tasneemsayeed) on Twitter. I always announce significant new blog posts and interesting mobile topics via a tweet. You can also subscribe to this blog.  Feel free to post any comments that you may have below.

Friday, March 8, 2013

Implementing Singletons for the iOS platform

The Singleton design pattern is one of the most frequently used design patterns when developing for the iOS platform. It is a very powerful way to share data across different parts of an iOS application without having to explicitly pass the data around.

Overview

Singleton classes play an important role in iOS, as they embody an extremely useful design pattern.  Within the iOS SDK, the UIApplication class has a method called sharedApplication which, when called from anywhere, returns the UIApplication instance associated with the currently running application.
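For example, here is a minimal, hypothetical sketch (the method name logApplicationState is my own, while sharedApplication and applicationState are standard UIKit APIs) showing how any object can reach that single instance without holding a reference to it:

Example: Using sharedApplication

#import <UIKit/UIKit.h>

// Hypothetical helper method: any object in the app can reach the single
// UIApplication instance without it being passed around explicitly.
- (void)logApplicationState {
    UIApplication *app = [UIApplication sharedApplication];
    NSLog(@"Application state: %ld", (long)app.applicationState);
}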

How to implement the Singleton Class

You can implement the Singleton class in Objective-C as follows:

MySingletonManager.h


@interface MySingletonManager : NSObject {
    NSString *someProperty;
}


@property (nonatomic, retain) NSString *someProperty;

+ (id)sharedManager;

@end

MySingletonManager.m

#import "MySingletonManager.h"

@implementation MySingletonManager

@synthesize someProperty;

#pragma mark Singleton Methods

+ (id)sharedManager {
    static MySingletonManager *sharedMySingletonManager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedMySingletonManager = [[self alloc] init];
    });
    return sharedMySingletonManager;
}
                  
- (id)init {
    if (self = [super init]) {
        someProperty = @"Default Property";
    }
    return self;
}

- (void)dealloc {
    // should never be called, but included here for clarity
}

@end

The above code fragment defines a static variable called sharedMySingletonManager, which is initialized once and only once in sharedManager.  The way we ensure that it is only created once is by using the dispatch_once function from Grand Central Dispatch (GCD).  This is thread safe and handled entirely by the system, so you do not need to worry about it at all.

If you would rather not use GCD, then you can use the following code fragment for sharedManager. Note that the static variable is declared inside the method here so that it persists across calls:

Non-GCD Based 

+ (id)sharedManager {
    // The static persists across calls; @synchronized keeps creation thread safe.
    static MySingletonManager *sharedMySingletonManager = nil;
    @synchronized(self) {
        if (sharedMySingletonManager == nil) {
            sharedMySingletonManager = [[self alloc] init];
        }
    }
    return sharedMySingletonManager;
}
 
Then, you can reference the singleton from anywhere by calling sharedManager as shown below:

Referencing the Singleton

MySingletonManager *sharedManager = [MySingletonManager sharedManager];
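As a quick illustration of sharing data through the singleton (a hypothetical sketch; the string values are just placeholders), two unrelated parts of the app can read and write the same someProperty:

Sharing data via the Singleton

#import "MySingletonManager.h"

// One part of the app stores a value on the shared instance...
MySingletonManager *manager = [MySingletonManager sharedManager];
manager.someProperty = @"Hello from screen A";

// ...and a completely different class later reads that same instance.
NSLog(@"Shared value: %@", [[MySingletonManager sharedManager] someProperty]);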
Happy Singleton'ing!   If you find this post useful, then mention me in the comments.
