Wednesday, May 7, 2014

Quick Tutorial: Implementing Your Own Delegates in Objective-C

The delegate pattern is another simple, yet powerful design pattern. Many UI elements in iOS (e.g. UIScrollView, UITextField, UITableView, etc.) use delegates to control their behavior. A delegate is an object that acts on behalf of, or in coordination with, another object when that object encounters an event in a program. The delegating object is often a responder object—that is, an object inheriting from NSResponder in AppKit or UIResponder in UIKit—that is responding to a user event. The delegate is an object that is delegated control of the user interface for that event, or is at least asked to interpret the event in an application-specific manner. For a more thorough description of delegates, refer to the Apple documentation.
In order to implement your own custom delegate protocol, you will need to:
  1. Modify the header (.h) file for the delegating class (e.g. MyClass.h)
  2. Add the @protocol declaration
  3. Add a delegate @property
  4. Declare the methods that delegates can implement (see the sketch below).
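
Here is a minimal sketch of what MyClass.h might look like; the class name, protocol name, and method names are placeholders, and the weak delegate property assumes ARC (use assign instead under manual reference counting):

#import <Foundation/Foundation.h>

@class MyClass;

@protocol MyClassDelegate <NSObject>
@optional
// Called by MyClass when it finishes its work; delegates may implement this.
- (void)myClass:(MyClass *)myClass didFinishTask:(NSString *)taskName;
@end

@interface MyClass : NSObject

// Weak reference so the delegate is not retained (avoids retain cycles).
@property (nonatomic, weak) id <MyClassDelegate> delegate;

- (void)startTask;

@end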

Next, within your implementation, make sure to check that the delegate is set and that it responds to the selector any time you want to call a delegate method, as in the example below.
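
For example, inside MyClass.m the check might look like this (startTask and the delegate method are the placeholder names from the sketch above):

- (void)startTask {
    // ... do the actual work here ...

    // Only call the delegate if it is set and actually implements the optional method.
    if (self.delegate && [self.delegate respondsToSelector:@selector(myClass:didFinishTask:)]) {
        [self.delegate myClass:self didFinishTask:@"SomeTask"];
    }
}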

Next, for any class that you want to conform to your new protocol, import the MyClass.h header file and list the delegate protocol in its @interface, as shown below.
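
For instance, a hypothetical view controller that adopts the protocol might be declared like this (again assuming ARC):

#import <UIKit/UIKit.h>
#import "MyClass.h"

@interface MyViewController : UIViewController <MyClassDelegate>

@property (nonatomic, strong) MyClass *myClass;

@end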

Finally, set the object's delegate to self somewhere (for example, in viewDidLoad) and implement the delegate method, as in the sketch below.
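
Continuing the placeholder example, MyViewController.m could set the delegate and implement the delegate method like this:

#import "MyViewController.h"

@implementation MyViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.myClass = [[MyClass alloc] init];
    self.myClass.delegate = self;   // register this controller as the delegate
    [self.myClass startTask];
}

#pragma mark - MyClassDelegate

- (void)myClass:(MyClass *)myClass didFinishTask:(NSString *)taskName {
    NSLog(@"Task finished: %@", taskName);
}

@end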

Tuesday, May 6, 2014

How to Send Email Within Your iPhone Application

This article provides a tutorial to help you send an email from inside your iPhone application using the iPhone SDK built-in APIs.
The iPhone SDK provides the built-in MessageUI framework, which greatly simplifies the implementation of email functionality within an iOS application.

Creating a Simple Email UI

Create a simple app with a view controller named "SimpleEmailAppViewController". Then, add a button to the view by dragging it in from the Object Library in Interface Builder, and rename the button title to "Contact Us".

When the user taps the "Contact Us" button, the app will display the email user interface.

Making Connections to the User Interface

In order to connect the "Contact Us" button to an action, select the SimpleEmailAppViewController.xib file from the Project Navigator to open it in Interface Builder. Switch to the Assistant editor and hide the Utility area so that the interface and its corresponding code are displayed side by side. Next, press and hold the Control key, click the "Contact Us" button, and drag towards "SimpleEmailAppViewController.h". When you place the pointer just below @interface and before @end and release the mouse button, a prompt appears that allows you to insert an outlet or action.
Select "Action" for "Connection" and enter "sendEmail" for "Name".

The event can be kept as "Touch Up Inside": when the user taps the "Contact Us" button and lifts the finger while still inside the button, the "sendEmail" method is invoked. Once you click Connect to confirm the changes, Xcode automatically adds the method declaration to the "SimpleEmailAppViewController.h" file.

Implementing the Email Interface

In "SimpleEmailAppViewController.h", adopt the "MFMailComposeViewControllerDelegate" protocol as shown below.
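
A minimal sketch of what the header might look like after adopting the protocol (assuming the sendEmail action created earlier):

#import <UIKit/UIKit.h>
#import <MessageUI/MessageUI.h>

@interface SimpleEmailAppViewController : UIViewController <MFMailComposeViewControllerDelegate>

- (IBAction)sendEmail:(id)sender;

@end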

In "SimpleEmailAppViewController.m", implement the "sendEmail" method. Also, add the implementation for the "MFMailComposeViewControllerDelegate" delegate method. Note that we will be utilizing the built-in iOS SDK class called "MFMailComposeViewController".
The "MFMailComposeViewController" class provides a standard interface for composing and editing an email message. You can use this view controller to display a standard email view within your iOS application. We populate the fields of this view with initial values, including the recipient email, subject, and body of the message.
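
A minimal sketch of the sendEmail method, assuming ARC; the recipient, subject, and body values here are just placeholders:

- (IBAction)sendEmail:(id)sender {
    // Always check that the device is configured to send email first.
    if (![MFMailComposeViewController canSendMail]) {
        NSLog(@"Mail services are not available on this device.");
        return;
    }

    MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init];
    mailComposer.mailComposeDelegate = self;

    // Populate the email fields with initial values.
    [mailComposer setToRecipients:@[@"support@example.com"]];
    [mailComposer setSubject:@"Feedback on SimpleEmailApp"];
    [mailComposer setMessageBody:@"Hello,\n\nI have a question about..." isHTML:NO];

    [self presentViewController:mailComposer animated:YES completion:nil];
}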

The "MFMailComposeViewControllerDelegate" protocol declares the following delegate method:

- (void)mailComposeController:(MFMailComposeViewController *)controller 
          didFinishWithResult:(MFMailComposeResult)result 
                        error:(NSError *)error;

This method is invoked when the user finishes with the mail composition interface, whether by sending the message or cancelling the operation, and it is your responsibility to dismiss the interface from this method. The result parameter tells you whether the message was sent, saved as a draft, cancelled, or failed. In a real-world application, the app should display any errors that occur if it fails to send the email message. A minimal implementation might look like the sketch below.
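
For example, a minimal implementation could simply log the result and dismiss the interface:

- (void)mailComposeController:(MFMailComposeViewController *)controller 
          didFinishWithResult:(MFMailComposeResult)result 
                        error:(NSError *)error {
    switch (result) {
        case MFMailComposeResultCancelled:
            NSLog(@"Mail cancelled");
            break;
        case MFMailComposeResultSaved:
            NSLog(@"Mail saved as a draft");
            break;
        case MFMailComposeResultSent:
            NSLog(@"Mail sent");
            break;
        case MFMailComposeResultFailed:
            NSLog(@"Mail failed: %@", [error localizedDescription]);
            break;
        default:
            break;
    }
    // The delegate is responsible for dismissing the mail composition interface.
    [self dismissViewControllerAnimated:YES completion:nil];
}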

Linking with the Message UI Framework

If you try to build the application at this point, you will notice build errors. The "MFMailComposeViewController" class is part of the MessageUI framework.
To fix the build problem(s), you will need to add the MessageUI framework so that it is linked properly with your application. In the Project Navigator, select the "SimpleEmailApp" project and then select the "SimpleEmailApp" target under Targets. Then, click "Build Phases" at the top of the project editor panel and expand the "Link Binary With Libraries" section.

Next, click the "+" button and select "MessageUI.framework". After you click the "Add" button, Xcode will link the MessageUI framework. This should fix your error(s), and you can now run your application. When you tap the "Contact Us" button, the app will display the email composition window with the pre-populated email content.

If you find this tutorial helpful and would like to see more tutorials like this, then share a comment! Or if you have any other suggestions, do let me know as well. And follow me on Twitter (@tasneemsayeed).

Saturday, October 26, 2013

DevFest West 2013: Lightning Talk: Learnings, Prototypes & Use Cases on Google Glass

According to IMS Research, the wearables market is poised to grow from 14 million devices shipped in 2011 to as many as 171 million units shipped by 2016!  According to a recent Business Insider report, "those betting big on wearable computing believe an assorted new crop of gadgets — mostly worn on the wrist or as eyewear — will become a "fifth screen," after TVs, PCs, smartphones, and tablets."

I was recently invited to speak at DevFest West 2013, held at the Google campus in Mountain View, where I presented a Lightning Talk on "Learnings, Prototypes & Use Cases on Google Glass".
The talk provides insights and lessons learned from experiments in building innovative services for Google Glass, such as capturing financial data from pictures and enabling mobile payments. It also covers a number of Glass use cases as well as Glass prototypes that we implemented across Intuit. If you find the presentation slides useful, then add a comment here or follow me @tasneemsayeed for future postings!

Wednesday, July 10, 2013

Learning on Accessibility for the iOS Platform

According to Apple's Accessibility Guide for iOS, you should make your iPhone application accessible to VoiceOver users because:
  • It increases your user base. You've worked hard to create a great application; don’t miss the opportunity to make it available to even more users.
  • It allows people to use your application without seeing the screen. Users with visual impairments can use your application with the help of VoiceOver.
  • It helps you address accessibility guidelines. Various governing bodies create guidelines for accessibility, and making your iPhone application accessible to VoiceOver users can help you meet them.
  • It's the right thing to do.
As of iOS 3.0, Apple has included the UI Accessibility programming interface, which is a lightweight API that helps an application provide all the information VoiceOver needs to describe the user interface and help visually impaired people use the application.
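
As a minimal sketch, making a custom view visible to VoiceOver in code can be as simple as setting a few UIAccessibility properties (the customView variable here is just a placeholder):

customView.isAccessibilityElement = YES;                      // expose the view to VoiceOver
customView.accessibilityLabel = @"Play";                      // short, localized description
customView.accessibilityHint = @"Plays the selected song.";   // describes the result of the action
customView.accessibilityTraits = UIAccessibilityTraitButton;  // tells VoiceOver it behaves like a button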
I recently gave a presentation on my learnings on Accessibility for the iOS platform at an internal event at Intuit, which I wanted to share with all of you. It provides an overview of what it means to make an app accessible on the iOS platform. It also provides guidelines for making your iOS app accessible and covers the most common accessibility attributes and traits, and how to add accessibility via Interface Builder as well as in code. It also covers accessibility notifications, the VoiceOver-specific API, accessibility containers, and some best practices for accessibility.
 
If you find the presentation helpful in making your iOS app accessible, feel free to send me a comment!  
Enjoy making your iOS app accessible!

Saturday, May 18, 2013

Google I/O 2013: An In-Depth Developer's Perspective


In comparison to last year's Google I/O, which featured the spectacular augmented-reality Google Glass announcement via a breathtaking sky-diving presentation, this year's Google I/O may have seemed lackluster with no new hardware.

Nonetheless, Google I/O kicked off with a record 3.5-hour keynote with a heavy focus on software and services. The keynote discussed services and feature upgrades for both Android and Chrome. If you were expecting a brand new Android phone or tablet announcement, then you may have been disappointed. However, a major takeaway was a unified user experience across the Chrome and Android platforms through shared services.

According to Google, Android was at 100 million activations in 2011 and 400 million in 2012. Now, in 2013, Google put the number of Android activations at an incredible 900 million! That's no small feat...

Huge applause came from the audience when Google announced the introduction of the Android Studio IDE, based on IntelliJ! Android Studio has more options for Android development, making the process faster and more productive. A live layout feature was shown that renders your app in real time while you are editing it, and it lets you preview a variety of device layouts (i.e. phones and tablets of different form factors). Refer to TechCrunch's excellent review, Google Launches Android Studio and New Features for Developer Console Including Beta Releases and Staged Layouts, for further details.

There were several other significant announcements made at Google I/O including:

  • Google Maps gets a complete overhaul and is released as part of Google Play Services. Three new location APIs were introduced:
    • Fused Location Provider API utilizes all of the communications sensors in the phone including WiFi, GPS and Cell network while significantly saving battery life. This is a new service that greatly improves any application that uses location services.
    • Geofencing API allows apps to be notified when the user enters or exits a configured virtual fence. The API allows each app to define up to 100 geofences simultaneously. Apps utilizing this service will get better battery life and performance.
    • Activity Recognition API utilizes the device capabilities of the hardware and machine learning to determine whether the user is walking, cycling or driving. This allows apps utilizing the service to adjust their behavior depending on the user's mode of transport. It is done in a very battery efficient way as no GPS is required.
  • Google+ Single Sign-in API is a cross-platform sign-in API. Fancy was used for a demo during the keynote: if you find a cool website that you like, you can sign in with Google+ and you will automatically be asked if you would like to install its app. If you say yes, the app will download and log you in automatically on your current device and on the other Android devices on your account - now that's very cool!
  • Google Cloud Messaging (GCM) for Android gets an overhaul. GCM is a service that allows you to send data from your server to your users' Android-powered device, and also to receive messages from devices on the same connection. The new features to GCM include:
    • Faster, easier GCM setup
    • Upstream messaging over XMPP. GCM's Cloud Connection Service (CCS) lets you communicate with Android devices over a persistent XMPP connection. The primary advantages of CCS are speed and the ability to receive upstream messages (i.e. messages from a device to the cloud).
    • New API for Synchronizing Notifications. Maps a single user to a notification key, which you can then use to send a single message to multiple devices owned by the user.
  • Google Play Music All Access was launched in the US at I/O. It is a music subscription from Google that costs $9.99/month and comes with a 30-day free trial. It allows you to explore millions of music tracks, so it seems better suited for music discovery than its competitors (e.g. Spotify, Pandora). It is a simple on-demand-meets-radio service with various personalization features, and it works on phones, tablets, and web browsers. From a design and UX perspective, it makes it easy to switch between hands-on and hands-off experiences. If you are not sure what you want to listen to, you can just hit "Listen Now" and start listening to something right away. And when you want to search Google's huge on-demand catalog, you have that option as well.
  • Google Search. Google I/O 2013 marked the "end of search as we know it". Google announced that it is looking to change the way users go about finding information by expanding its voice search capabilities and through its various services such as Google's Knowledge Graph and Google Now with a credo the company has labeled "answer", "converse" and "anticipate". In other words, the company's core product will eventually respond better to naturally phrased questions. Anticipate means that Google Search will be able to guess what information users need the most and provide it for them easily via its Google Now service. The Google Now service pulls information from across Google services to act as a personal assistant of sorts by offering information on users' commutes, appointments and news from their favorite sources in the same place. Conversational Search was announced to be coming to all desktops and laptops via Chrome.
  • Chrome. Google announced that today there are 750+ Million active users of Chrome and that Chrome is increasingly used on mobile. Chrome design goals are: speed, simplicity and security. Google's Sundar Pichai said, "The same capabilities that you're used to using for Chrome on a desktop are going to be coming to Chrome on Android." Thanks to WebGL and Web Audio APIs, you will start seeing quite impressive web experiences including games and rich interactive environments that were typically limited to the desktop environment.
    • Better web imaging. Google compared JPEG vs. WebP images. The quality was indistinguishable, but the WebP image was about two-thirds the size (roughly a 31% reduction in file size). This will significantly improve load times on websites as well as help users avoid exceeding their data plan limits. Furthermore, WebP supports animated images as well.
    • Better video compression. Google compared H.264 vs. VP9. It was noted that the quality of the VP9 video is the same as H.264, while the VP9 file comes in at less than half the size (about a 63% reduction in file size).
  • mCommerce. Google reported that when it comes to shopping on your phone, the percentage of people who reach the purchase screen and then abandon it is incredibly high, in the 90% range. Google made 3 key announcements around mCommerce:
    • Consumer launch of payments via Gmail, which is rolling out slowly, with the initial rollout in the US
    • Two new APIs announced for developer launch:
      • Google Wallet Objects API. The vision is to digitize the whole wallet by allowing any kind of object to be inserted into Google Wallet.
      • Google Wallet Instant Buy for merchants selling physical goods. The goal is to allow consumers to make purchases within two clicks, thereby improving the user experience.
  • Google+. Google introduced 41 new Google+ features at I/O! First of all, Google+ gets a design overhaul: a multi-column design whose width scales depending on the device the user is on, with animations such as flips and fades. It can also do image analysis, recognize what is in an image, and hashtag it automatically.
  • Google Hangouts is now a standalone app that works on the desktop web, Android, and iOS, all announced at I/O. A hangout provides an ongoing conversation that doesn't end when you sign off. Also, real-time communication, that is, group video, is available at no charge.
  • Google Photos: Google announced earlier that in addition to unlimited backup of all of your standard-sized photos, it will now give you 15 GB of storage for full-sized images. Google also announced machine learning algorithms that help to create a highlight reel from all your photos. The service checks all of the photos in an album for blurriness, smiles, and several other criteria (learned from hundreds of actual human photographers) and produces a highlight reel. Google also introduced an "auto-enhance" feature that will instantly adjust tonal distribution, red-eye reduction, skin softening, noise reduction, and several other parameters to automatically enhance the picture. And finally, Google introduced a new "Auto Awesome" mode: if you take a burst of photos, it will make an animated GIF out of them. If you take a series of shots and some are dark, or someone is smiling in one but not another, it can make a composite image similar to what the Galaxy S4 can do, without the user having to activate that setting. And that sounds cool! It can also handle "Motion" shots, HDR, and panoramic photos.
And no developer review would be complete without mention of the Developer Sandbox, which occupied two floors, with dedicated areas for Android, Wallet, Chrome, a standalone Chromebook Pixel display, Google+, Photos, and Google Play. And not to forget, Google Glass had an extra-large sandbox, which was often crowded with spectators! Google started shipping an early version of Google Glass known as the Explorer edition, mostly to the couple of thousand developers who had requested them at last year's Google I/O conference and forked over a hefty price of $1500!
I was fortunate to explore Google Glass, and the experience was truly amazing! Glass is not like wearing a laptop on your face, though it does have 12 GB of usable storage (16 GB total), synced with the Google cloud. Glass is more like augmented-reality spectacles, but without lenses. Its lightweight frame rests on the ears and nose, suspending a small prism at the upper right corner of the wearer's field of vision. Glass has a tiny touchpad built into one earpiece and a microphone to pick up voice commands. Interestingly, the earpiece uses "bone conduction" to deliver sound by vibration against the wearer's head. You can speak aloud to get information or tell Glass to "take a picture" or "take a video" (up to 10-second snippets). You can ask questions like "Google, find me a restaurant" and see the results in "Knowledge cards". You can also send texts and make phone calls. If you swipe toward the back, you can see Google Now cards for flight, traffic, and other information. In summary, Google I/O 2013 showcased several interesting innovations across Google products.

If you found this article useful or want to share your developer perspectives on Google I/O 2013, feel free to share your comments below, follow me by clicking on the upper left hand corner and/or follow me on Twitter @tasneemsayeed.

Tuesday, April 2, 2013

Best Practices for Mobile App Development on Android

Designing and building apps that look great and perform well on a wide range of devices, from smartphones to tablets, is crucial to ensuring an optimal user experience.
At the recent DevFest Silicon Valley event held at Google on March 15th, I presented a talk on Best Practices for Mobile App Development on Android. The talk focuses on the golden rules and best practices of performance, including how to keep your apps responsive, how to effectively implement background services, and tips for improving the performance and scalability of long-running applications; it also touches briefly on best practices for user experience and concludes with the benefits of intents and intent filters. For further details, please refer to the presentation slides.


If you enjoyed this article, you may want to follow me (@tasneemsayeed) on Twitter. I always announce significant new blog posts and interesting mobile topics via a tweet. You can also subscribe to this blog.  Feel free to post any comments that you may have below.

Friday, March 8, 2013

Implementing Singletons for the iOS platform

The Singleton design pattern is one of the most frequently used design patterns when developing for the iOS platform. It is a very powerful way to share data across different parts of an iOS application without having to explicitly pass the data around manually.

Overview

Singleton classes play an important role in iOS as they exhibit an extremely useful design pattern. Within the iOS SDK, the UIApplication class has a method called sharedApplication which, when called from anywhere, will return the UIApplication instance that is associated with the currently running application.
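
For example, from anywhere in your code you can obtain that shared instance with a single call:

UIApplication *app = [UIApplication sharedApplication];  // returns the application's singleton instance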

How to implement the Singleton Class

You can implement the Singleton class in Objective-C as follows:

 MySingletonManager.h


#import <Foundation/Foundation.h>

@interface MySingletonManager : NSObject {
    NSString *someProperty;
}


@property (nonatomic, retain) NSString *someProperty;

+ (id)sharedManager;

@end

MySingletonManager.m

#import "MySingletonManager.h"

@implementation MySingletonManager

@synthesize someProperty;

#pragma mark Singleton Methods

+ (id)sharedManager {
    static MySingletonManager *sharedMySingletonManager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedMySingletonManager = [[self alloc] init];
    });
    return sharedMySingletonManager;
}
                  
- (id)init {
    if (self = [super init]) {
        someProperty = @"Default Property";
    }
    return self;
}

- (void)dealloc {
    // should never be called, but included here for clarity
}

@end

The above code fragment defines a static variable called sharedMySingletonManager, which is then initialized once and only once in sharedManager. The way we ensure that it is only created once is by using the dispatch_once function from Grand Central Dispatch (GCD). This is thread-safe and handled entirely by the OS, so you do not need to worry about it at all.

If you would rather not use GCD, then you can use the following code fragment for sharedManager instead (note that the static variable is declared inside the method here so the snippet is self-contained):

Non-GCD Based 

+ (id)sharedManager {
    static MySingletonManager *sharedMySingletonManager = nil;
    @synchronized(self) {
        if (sharedMySingletonManager == nil) {
            sharedMySingletonManager = [[self alloc] init];
        }
    }
    return sharedMySingletonManager;
}
 
Then, you can reference the singleton from anywhere by calling the class method as shown below:

Referencing the Singleton

MySingletonManager *sharedManager = [MySingletonManager sharedManager];
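
For instance (a hypothetical usage), changing someProperty through the shared instance makes the new value visible anywhere else the singleton is used:

MySingletonManager *manager = [MySingletonManager sharedManager];
NSLog(@"%@", manager.someProperty);        // prints "Default Property"
manager.someProperty = @"Updated value";   // the change is visible everywhere the shared instance is used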
Happy Singleton'ing! If you find this post useful, then let me know in the comments.
