Posts Tagged ‘iPhone’
Supporting Dynamic Text in your iOS apps
I recently gave a talk on Dynamic Type at a CocoaHeads meeting. You can download the presentation and sample code from here.
iOS 7 introduced a cool feature called Dynamic Type. What it essentially allows you to do is set your preferred reading text size in the Settings app (under General or Accessibility) and, voilà, all apps that support Dynamic Type automatically adjust their text to the preferred size.
In this post, I'll go over what you need to do to support Dynamic Type within your apps, so that yours will be one of the cool apps that reacts to the preferred text size change!
Text Kit, Text Styles and Font Descriptors
iOS 7 introduced Text Kit – a powerful framework that lets you work with rich text content and layouts without the complexities of drawing with Core Text or having to use UIWebView.
An important component of Text Kit is text styles. A text style describes an intended use of a font, such as headings or body text.
The table below lists the supported text styles and the font descriptors that define them when the preferred text size slider in Settings is set to the center position.
Text Style | Font Descriptor
UIFontTextStyleHeadline (headings) | NSCTFontUIUsageAttribute = UICTFontTextStyleHeadline; NSFontNameAttribute = ".AppleSystemUIHeadline"; NSFontSizeAttribute = 17
UIFontTextStyleSubheadline | (more…)
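In code, you ask UIFont for the font matching a text style rather than hard-coding a font name and size. Here is a minimal sketch, assuming a view controller with two illustrative labels (headlineLabel and bodyLabel are my names, not part of the API):

// Ask for the user's preferred fonts instead of fixed sizes.
self.headlineLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleHeadline];
self.bodyLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleBody];

// Re-apply the fonts whenever the user changes the preferred text size.
// (preferredContentSizeChanged: is a hypothetical handler that re-runs the two lines above.)
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(preferredContentSizeChanged:)
                                             name:UIContentSizeCategoryDidChangeNotification
                                           object:nil];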
Functional testing is system-level, end-to-end testing of your app from a user's perspective. Automating the functional testing of your app offers several benefits: it saves time and effort, it is repeatable, it simplifies regression testing, it enables testing with large data sets, and it can be tied into your Continuous Integration process. UI Automation is an automated functional test framework from Apple. Here, user interactions are driven by test scripts written in JavaScript and executed using the Automation instrument in Instruments.
While there are several other automated functional test tools available, including Calabash, KIF, Frank and FoneMonkey, UI Automation has the benefit that it is very simple to use, needs minimal or no changes to your existing app and, since it's from Apple, is (fairly) well supported and maintained.
I recently gave a talk on automated functional testing of iOS apps using UI Automation. You can download the presentation from this link. There is also a sample demo application with a corresponding set of UI Automation test scripts.
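One app-side change is often worth making, though: UI Automation locates elements through the accessibility hierarchy, so any control your scripts need to drive should carry a meaningful accessibility label. A minimal sketch (loginButton is an illustrative control, not one from the demo app):

// Expose the control to UI Automation scripts by name.
self.loginButton.isAccessibilityElement = YES;
self.loginButton.accessibilityLabel = @"Login";

// A test script can then locate it by that label, along the lines of:
// UIATarget.localTarget().frontMostApp().mainWindow().buttons()["Login"].tap();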
My Talk on “Intro to iOS Development” at ASEI
Today, I gave an introductory-level technical talk about developing mobile apps for the iOS platform at the American Society Of Engineers Of Indian Origin (ASEI), MI. The ASEI is "a two level (National and Local) non-profit organization of engineers and technical professionals of Indian origin". You can learn about them at http://www.aseimi.org. I was aware of the group, but I had never attended any of their meetings, so I was not sure what to expect.
Right after work, I made the 45-mile drive to the ASEI meeting. Fighting the evening rush hour traffic, I reached there in the nick of time; I probably made the organizers quite nervous!
There was a pretty good turnout. These were people who, just like me, had driven in from work and who probably had ten other places they'd rather be on a fall evening with picture-perfect weather. I had to ensure that my talk was well worth their evening.
Soon after the featured mobile app presentation, I got started. I surveyed the room and learnt that there were fewer than five developers present. The rest of the audience was a mix of people with diverse backgrounds (different industries, different roles, varying demographics; a few were not even iPhone users).
My presentation was intended to be fairly technical, so my challenge was to make it appeal to this diverse audience. Although not all of them were developers, I knew they all had one thing in common – they were very keen on learning more about iOS mobile development. I knew that was a start.
So for the next hour or so, I moved quickly through my slides. I had material for a couple of hours, but I tried to focus on what would appeal broadly. Then the questions started pouring in, and they were all very relevant. People were paying attention (well, at least most of them were) and it was interesting to see the different perspectives.
I left the meeting with a greater sense of community.
You can download my presentation from here. It is intended to be a primer on the iOS platform and developing apps for it.
NSString *jsForTextSize = [[NSString alloc] initWithFormat:@"document.getElementsByTagName('body')[0].style.webkitTextSizeAdjust = '%d%%'", updatedFontSize * 100 / DEFAULTWEBVIEWFONTSIZE];
[self.myWebView stringByEvaluatingJavaScriptFromString:jsForTextSize];
In the snippet above, DEFAULTWEBVIEWFONTSIZE refers to the default font size of the text content presented within the web view and updatedFontSize refers to the desired font size. So, for example, if DEFAULTWEBVIEWFONTSIZE is 18 and updatedFontSize is 9, then updatedFontSize*100/DEFAULTWEBVIEWFONTSIZE evaluates to 50, i.e. the text is scaled to 50% of its default size.
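Note that the JavaScript can only take effect once the page content has loaded, so a natural place to trigger it is the web view's delegate callback. A minimal sketch, assuming this class is the web view's delegate and scaleWebViewText is a hypothetical helper that wraps the snippet above:

- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    // Apply the text size adjustment once the content is ready.
    [self scaleWebViewText];
}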
• The next step is to adjust the frame height of the web view so the scaled text content is visible. The simplest way to do this is to make the UIWebView a subview of a scrollable view (UIScrollView or a subclass). That way, you can adjust the web view's frame height and, correspondingly, the content size of the scroll view that encompasses it. Adjusting the web view height is a bit tricky, as described below.
The sizeThatFits: method on the web view returns a size that best fits the content. The problem is that while scaling up returns the updated size, scaling down does not: if the web view is already large enough to display the specified content, the current frame size is returned instead of a smaller one.
So first, reset the current frame height of the web view to a small value such as 1.
CGRect adjustedFrame = self.myWebView.frame;
adjustedFrame.size.height = 1;
self.myWebView.frame = adjustedFrame;
Now, obtain the frame size that best fits the content. Since the current frame height is 1, we are always scaling up. Update the frame size of the web view with the new size.
CGSize frameSize = [self.myWebView sizeThatFits:CGSizeZero];
adjustedFrame.size.height = frameSize.height;
self.myWebView.frame = adjustedFrame;
Finally, update the content size of the UIScrollView containing the web view to accommodate the scaled content.
CGSize scrollViewSize = self.myScrollView.contentSize;
scrollViewSize.height = adjustedFrame.size.height + self.myWebView.frame.origin.y;
self.myScrollView.contentSize = scrollViewSize;
You can download a sample project that scales the text content of a web view from here. Shown below are some screenshots of the app.
Xcode and Zombie Processes
The Problem…
If you have been using Xcode (the latest version as of writing this post is Xcode 4.6.2) for an extended period of time, testing your app on the iOS Simulator, you may eventually encounter a “Resource temporarily unavailable” build error. There are no errors associated with your source code, but the system is unable to successfully build your app and launch the simulator to run it. You would observe something like this in your build output.
So what’s going on?
The reason this occurs is that every time you launch the iOS Simulator through Xcode to run your app and then quit/stop the app, Xcode leaves behind a zombie process. If you are not familiar with zombie processes in Unix: a zombie is a process that has completed execution but whose entry remains in the process table. It is the responsibility of the parent process to eventually reap these entries. Zombies don’t use any of the computer’s resources, so you won’t observe a depletion of resources, but in this “undead” state each one holds on to a PID (process identifier). The implication is that eventually your system will run out of PIDs to assign to new processes, resulting in a failure to spawn or launch any new process.
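If you’d like to see how a zombie is born, here is a minimal sketch in plain C (it compiles as Objective-C too): the child exits immediately, but the parent never calls wait(), so the child’s entry lingers in the process table in state “Z”.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        return 0;   // child exits immediately
    }
    // Parent deliberately never calls wait(), so the child stays a zombie.
    printf("Child %d is now a zombie; try: ps -p %d\n", pid, pid);
    sleep(60);      // keep the parent alive so the zombie persists
    return 0;
}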
You can confirm this behavior by running your iOS app through Xcode a few times and then running the “ps” command in a terminal window. You will observe a bunch of zombie processes listed for your app. The “Z” in the “S” (or “STAT”) column indicates that the symbolic state of the process is “zombie”.
Priya-Mac-01:$ ps -aelf
UID   PID    PPID  F     CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME    CMD     STIME
.......
501   928     233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Fri11AM
501   1072    233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Fri11AM
501   9473    233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Fri05PM
501   11380   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon09AM
501   11599   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon09AM
501   11614   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   11758   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   12412   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   12564   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   13245   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   13407   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon11AM
501   13590   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   13725   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) 19Jun13
501   14545   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon12PM
501   14646   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon12PM
501   14761   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon12PM
501   14835   233  6000  0   0   0  0  0   -    Z 0    ??  0:00.00 (MyApp) Mon12PM
.......
Priya-Mac-01:$ ps -aelf | grep "MyApp" | wc -l
272
In my case, there were 272 zombie processes associated with my app that Xcode didn’t reclaim. Eventually, you will notice that you are no longer able to build the app and launch the simulator to run it. In fact, you probably won’t be able to launch any new application at all. Yep- not a good place to be.
So what are your options?
Reboot:
The simplest and safest method is to reboot your system. This will get rid of the zombie processes.
Re-initializing/Killing the Parent Process:
Generally, killing the parent process of the zombie processes should take care of it, but unfortunately, in the case of Xcode, the parent is the system launchd process. launchd is a core system process that handles the launching of many other processes. Issuing a “kill” command to launchd can have undesirable results and can even make your system unresponsive. So DO NOT kill the launchd process. You could try to re-initialize the process using kill with the HUP (“hang up”) signal, but you are probably better off rebooting your Mac.
If you are curious, you can follow the steps below to determine the parent process of a zombie Xcode process.
1) Identify the PID of the parent process (the “PPID”) corresponding to the zombie process using the following command.
Priya-Mac-01:$ ps -p <PID of MyApp zombie process> -O ppid
This will output the PPID of the parent process corresponding to the zombie process.
2) You can get details of the parent process using the following command
Priya-Mac-01:$ ps -p <PID of parent process>
  PID TTY      TIME    CMD
  236 ??       0:00.75 /sbin/launchd
The output of the above command indicates that launchd is the parent process.
You can find more details on zombie processes at http://en.wikipedia.org/wiki/Zombie_process.
SSH into your jailbroken iDevice without a password
If you are developing for/on a jailbroken iPhone or iPad, you are more than likely going to have to SSH into your iDevice a number of times, including transferring files to/from the device via SCP. Entering a password every time you SSH into the device is very tedious. Moreover, password-less access becomes imperative if you need automation scripts to SSH/SCP into the device.
This post explains how you can enable public-key authentication with SSH in order to bypass the password entry process. Note that enabling password-less entry into your iDevice is a potential security risk because anyone with access to your system can now access/control your device without any authentication. So if you enable this, be sure to secure access to your systems!
The steps to enable public-key authentication with the iPhone/iPad are no different than with any UNIX system.
The following commands need to be executed on the system from which you would be SSHing into your iPhone/iPad.
If you are using a Mac or a Linux system, the commands are executed from the terminal window. If you are using a Windows PC, you would have to run these commands within Cygwin.
- Go to the .ssh folder
MyMacBook-Pro-2:~ mactester$ cd ~/.ssh
- Generate a public/private key pair by running the ssh-keygen command. You will be prompted for some information. You can leave the file in which to save the key as the default. Enter a passphrase; you will be prompted for the passphrase when you try to access your key.
MyMacBook-Pro-2:.ssh mactester$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/Users/mactester/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/mactester/.ssh/id_dsa.
Your public key has been saved in /Users/mactester/.ssh/id_dsa.pub.
- A public/private key pair will have been generated in the .ssh folder. The .pub file corresponds to the public key.
MyMacBook-Pro-2:.ssh mactester$ ls
id_dsa id_dsa.pub
- Copy the PUBLIC KEY over to the ~/.ssh folder of your iPhone/iPad (in this example, the IP address of my device is 192.168.1.10)
MyMacBook-Pro-2:.ssh mactester$ scp id_dsa.pub root@192.168.1.10:~/.ssh
The following commands need to be executed on your iPhone/iPad.
For this, you can SSH into the iDevice (you will still be prompted for a password at this stage) or you can type the following commands directly into the terminal application on your jailbroken iDevice.
- Save the public key as “authorized_keys” (or “authorized_keys2”, as in the example below). If you already have public keys associated with other systems stored on your device, be sure to append (rather than overwrite) the new public key, as the “>>” redirection in the example does. Make sure you set the right access permissions on the file.
MyiPhone:~root# cd ~/.ssh
MyiPhone:~/.ssh root# cat id_dsa.pub >> authorized_keys2
MyiPhone:~/.ssh root# chmod 0600 authorized_keys2
That’s it. The next time you SSH into your iDevice, you will not be prompted for a password.
iOS devices support the delivery of multimedia content via HTTP progressive download or HTTP Live Streaming. As per Apple's guidelines to app developers, "If your app delivers video over cellular networks, and the video exceeds either 10 minutes duration or 5 MB of data in a five minute period, you are required to use HTTP Live Streaming. (Progressive download may be used for smaller clips.)".
This article discusses a method for generating HTTP Live Streaming content using freely available tools. If your needs are large scale, you may need to explore commercial encoders such as Sorenson Squeeze 7 or other commercial video platforms, which are beyond the scope of this article.
HTTP Live Streaming – A (Very) Brief Overview:
In HTTP Live Streaming, the multimedia stream is segmented into a sequence of media segments, each chunk holding enough information to be decoded on its own. A playlist file is a list of media URIs, with each URI pointing to a media segment. The media URIs are specified in playback order, and the duration of the playlist is the sum of the durations of its segments. Media playlist files have an “.m3u8” extension. The server hosts the media segments and the playlist file. In order to play back content, the media streaming app on the device fetches the playlist file and then the media segments at the URIs it specifies. The transport protocol is HTTP.
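For illustration, a hand-written media playlist for three ten-second segments might look like this (the segment names match the mediafilesegmenter output shown later in this article):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
fileSequence0.ts
#EXTINF:10,
fileSequence1.ts
#EXTINF:10,
fileSequence2.ts
#EXT-X-ENDLIST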
Now, you can have multiple encodings/renditions of the same multimedia content. In this case, you specify a variant playlist file that contains URIs of the playlist files corresponding to each rendition, and the iDevice can switch between the encodings, thereby adapting to changing network bandwidth conditions. HTTP Live Streaming is essentially a form of adaptive streaming. You can get more details from the IETF I-D available here. Other well-known adaptive streaming protocols include Microsoft’s Smooth Streaming, Adobe’s Dynamic Streaming and the DASH standards specification.
Encoding the content using Handbrake:
- Among the free tools, I’ve found Handbrake to be the best in terms of performance and supported formats. Versions are available for Windows and the Mac. In my experience, the Mac version stalled a few times during encoding, at times hogging all the CPU cores on my MacBook Pro; the Windows version worked flawlessly.
- Use the encoding guidelines provided by Apple to encode your content. If you expect your app users to access the content under a variety of network conditions (WiFi, cellular, etc.), you will want to support multiple encodings of the content.
The tools for generating content for HTTP Live Streaming:
Once you have encoded your content, you have to prepare it for delivery via HTTP Live Streaming. There are command-line utilities for the Mac that can be downloaded for free from http://connect.apple.com/ (you would need an Apple developer ID for installing the tools, which again is free). You would need the following tools –
- mediafilesegmenter
- variantplaylistcreator
Once installed, they will be available in the /usr/bin directory of your Mac.
Segmenting your encoded content:
Use the mediafilesegmenter tool to segment the encoded media files. You need to be the “root” user in order to run the tool.
/usr/bin/mediafilesegmenter [-b | -base-url <url>]
[-t | -target-duration duration]
[-f | -file-base path] [-i | -index-file fileName]
[-I | -generate-variant-plist]
[-B | -base-media-file-name name] [-v | -version]
[-k | -encrypt-key file-or-path]
[-K | -encrypt-key-url <url>]
[-J | -encrypt-iv [random | sequence]]
[-key-rotation-period period]
[-n | -base-encrypt-key-name name]
[-encrypt-rotate-iv-mbytes numberMBytes]
[-l | -log-file file] [-F | -meta-file file]
[-y | -meta-type [picture | text | id3]]
[-M | -meta-macro-file file]
[-x | -floating-point-duration] [-q | -quiet]
[-a | -audio-only] [-V | -validate-files] [file]
- Open a terminal window on your Mac.
- Type “man mediafilesegmenter” at the command line prompt to get a full description of the usage of the tool.
<command prompt>$ man mediafilesegmenter
- An example of using the tool to segment a media file named “mymedia_hi.mp4” is as follows-
<command prompt>$ sudo /usr/bin/mediafilesegmenter -I -f mymedia_hi mymedia_hi.mp4
You will be prompted for the root password (which you must provide)
In the example, the media file “mymedia_hi.mp4” is assumed to be present in the current directory from which the command is executed. Otherwise, be sure to specify the path to the media file. The segments will be generated in a subfolder named “mymedia_hi” within the current directory.
<command prompt>$ cd mymedia_hi
<command prompt>$ ls
fileSequence0.ts fileSequence14.ts fileSequence6.ts
fileSequence1.ts fileSequence15.ts fileSequence7.ts
fileSequence10.ts fileSequence2.ts fileSequence8.ts
fileSequence11.ts fileSequence3.ts fileSequence9.ts
fileSequence12.ts fileSequence4.ts prog_index.m3u8
fileSequence13.ts fileSequence5.ts
The fileSequence*.ts files correspond to the media segments, which are MPEG-2 transport streams. The prog_index.m3u8 file is the playlist corresponding to the segmented media file and specifies the media URIs of the segments.
- The “-I” option specified in the example command generates a variant plist file, which will subsequently be required to generate the variant playlist file as described in the next step. You don’t need this option if you don’t plan on streaming multiple encodings of your content.
Assuming you are in the “mymedia_hi” folder, type the following to get the list of generated variant plist files.
<command prompt>$ cd ..
<command prompt>$ ls *.plist
mymedia_hi.plist
Generating the variant playlist file:
If you do not intend to stream multiple renditions of the content, you can skip this step and proceed to the “Streaming the Content” section.
- Follow the procedure in the “Segmenting your encoded content” section to segment every encoding of the media content that you wish to stream. For example, if you plan on supporting three renditions of your media content for high, medium and low network bandwidth conditions respectively, you will have to use the “mediafilesegmenter” tool to segment each of those renditions.
- Use the variantplaylistcreator tool for generating the variant playlist file.
/usr/bin/variantplaylistcreator [-c | -codecs-tags] [-o | -output-file fileName]
[-v | -version] [<url> <variant.plist> ...]
- Type “man variantplaylistcreator” at the command line prompt to get a full description of the usage of the tool.
<command prompt>$ man variantplaylistcreator
- An example of using the tool to generate a variant playlist file named “mymedia_all.m3u8” is as follows –
<command prompt>$ sudo /usr/bin/variantplaylistcreator -o mymedia_all.m3u8 http://mywebserver/mymedia_lo/prog_index.m3u8 mymedia_lo.plist http://mywebserver/mymedia_med/prog_index.m3u8 mymedia_med.plist http://mywebserver/mymedia_hi/prog_index.m3u8 mymedia_hi.plist
You will be prompted for the root password (which you must provide)
The URL associated with the prog_index.m3u8 file of each encoding corresponds to the URL of the web server that will be used for hosting/streaming the media content.
- The generated variant playlist file “mymedia_all.m3u8” will specify the URIs of the playlist files (prog_index.m3u8) corresponding to each encoding of the media file. The contents of the file (viewable using your favorite text editor) should look something like this –
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=87872
http://mywebserver/mymedia_lo/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=100330
http://mywebserver/mymedia_med/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1033895
http://mywebserver/mymedia_hi/prog_index.m3u8
Streaming the Content:
- Upload the variant playlist file (mymedia_all.m3u8 in our example) and ALL the sub-folders containing the generated segments for every rendition of the media file to the web server that will stream the content. You do not have to copy the media files (e.g. the .mp4 files) from which the segments were generated. You can host the content on any regular web server such as the Apache web server or Microsoft's IIS. For instance, with an Apache server on Windows, you would copy all of this to the “htdocs” folder, e.g. c:\Program Files(x86)\Apache Software Foundation\Apache2.2\htdocs.
- Typically, you need to make no changes to the web server in order to stream the content. In some cases, you may need to update the web server configuration file to add support for the .m3u8 MIME type. I had to do this for my IIS web server, where I associated the .m3u8 extension with the “application/octet-stream” MIME type.
Note: If your needs are large scale, you would employ the services of a Content Distribution Network (CDN) such as Akamai to publish and distribute your content.
Accessing the Content from your iDevice:
- In order to access the streaming content on your iDevice, the media URL must point to the appropriate playlist (.m3u8) file. This is either the variant playlist file if you support multiple encodings of the content (mymedia_all.m3u8 in our example) or the prog_index.m3u8 playlist file of a specific encoding.
Example: http://<webserver>/<path to the .m3u8 playlist file>
If you do not have a streaming app, you can open the URL in the Safari browser on your device. If everything goes well, the stream should start playing on your device.
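If you want playback inside your own app rather than Safari, here is a minimal sketch using MPMoviePlayerController, which plays HTTP Live Streaming URLs natively (the URL is the illustrative variant playlist from our example; in real code, keep a strong reference to the player, e.g. in a property):

#import <MediaPlayer/MediaPlayer.h>

// Point the player at the variant playlist; it picks a rendition
// to match the current network conditions.
NSURL *streamURL = [NSURL URLWithString:@"http://mywebserver/mymedia_all.m3u8"];
MPMoviePlayerController *player =
    [[MPMoviePlayerController alloc] initWithContentURL:streamURL];
player.view.frame = self.view.bounds;
[self.view addSubview:player.view];
[player play];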
Like the million-plus who pre-ordered their iPhone 4S, I anxiously waited for my shiny new phone. And like most of them, I was most curious about Siri, the smart new voice recognition system unique to the iPhone 4S. I’ve not had much luck with voice recognition/voice activation systems in the past; even rudimentary voice-control systems like voice-activated dialers have been quite frustrating, to say the least. I was interested in how well Siri would fare with my accent (quite frankly, I don’t know what accent I have; it’s a blend), how fast Siri would respond to my commands, and whether Siri cares about bad (English) grammar (not that I have it!).
Voice and natural language processing is highly resource-intensive, and the voice recognition features found on resource-constrained devices rely on servers in the cloud to do the processing. This means that there is network latency to be factored into the response times. So I was curious to see if Siri works in offline mode, for if it did, that would be stupendous!!
So when my phone arrived today, Siri was pretty much the first feature I checked out.
OK, so not surprisingly, Siri requires an Internet connection. It sends your commands to servers in the cloud (iCloud?) for processing. That said, depending on the speed of the connection, the response times can vary.
And it responds in a woman's voice. I'm not sure why that's the case, but most voice command systems, including my GPS, do. (Update: I recently found this article that might have some answers on this topic: http://edition.cnn.com/2011/10/21/tech/innovation/female-computer-voices/)
First, I started off with the “typical” questions. Here are a few of them –
Me: “I am thinking of lunch at 12:00 today”
Siri: I found 16 restaurants whose reviews mention lunch. …14 of them are fairly close to you.
Me: “I have to remember to pick up my daughter at 5:30PM today”
Siri: What time shall I remind you?
Me: 5:30 PM
Siri: Shall I set the reminder for 5:30 today?
Me: What are new movie releases in the US?
Siri: I found a number of movie theaters in Washington
And it provided me with a list of movie theaters. But why Washington?? That’s nowhere close to where I live!
Overall, it did a decent job of understanding my accent. In fact, it did something which most of my American friends never do- pronounce my husband's last name (17 letters long) correctly!! It won brownie points right there!
It was not perfect; it didn’t accurately recognize words a couple of times, but it still understood the intent, which is probably the more important part.
The artificial intelligence system relies on certain keywords to accomplish its tasks, for instance “remind” or “remember” for setting up reminders. So a command like “I have to pick up my daughter at 5:30 today” does not evoke a response to set up the reminder. Instead, it responds “I don’t know what you mean by ‘pick up my daughter at 5:30 today’”. Since it leverages information about your contacts, location, address books etc. to do the processing and customize the results, the accuracy of responses may vary.
Then, as any normal person would do, just for the fun of it, I decided to test a slew of ambiguous or weird questions –
Me: Are you my friend?
Siri: No comment!
Me: I am bored. What should I do?
Siri: I’ll see what I can do to entertain you.
I then waited eagerly, expecting it to, for instance, provide me with a list of movies or something. It didn’t do anything.
Me: I am sleepy
Siri: I hope you are not driving!
Siri is funny too!
My initial reaction is that I am quite impressed with Siri. It handles the routine commands very well, and unlike with previous voice command systems, I will be using it often. Over time, I’ll find out if it learns and responds better. Not sure if it’s a good thing, but long term, I can see this becoming a companion to a loner, or a sounding board of sorts. When the APIs to Siri go public, this can open up a whole range of new possibilities for the developer community. For example, an app that takes images of you in your new dress and, to the question “Does this dress make me look fat?”, gives the pat response “Yes Siree!!”