IBM Watson, Cognitive Computing & Speech APIs

IBM Watson is a cognitive computing platform that you can use to add intelligence and natural language analysis to your own applications. Watson employs natural language processing, hypothesis generation, and dynamic learning to deliver solutions for natural language question and answer services, sentiment analysis, relationship extraction, concept expansion, and language/translation services. And it is available for you to check out with IBM Bluemix cloud services.

Watson won Jeopardy, tackles genetics,  creates recipes, and so much more.  It is breaking new ground on a daily basis.

The IBM Watson™ Question Answer (QA) service provides an API that gives you the power of the IBM Watson cognitive computing system. With this service, you can connect to Watson, pose questions in natural language, and receive responses that you can use within your application.

In this post, I’ve hooked the Watson QA Node.js starter project up to the Web Speech API for speech recognition and speech synthesis. Using these APIs, you can now have a conversation with Watson. Ask any question about healthcare, and see what Watson has to say. Check out the video below to see it in action.

You can check out a live demo at:

Just click on the microphone button, allow access to the system mic, and start talking.  One warning: lots of background noise might interfere with the API’s ability to recognize your speech and generate a meaningful transcript.

This demo only supports Google Chrome at the time of writing. You can check out where the Web Speech API is supported at caniuse.com.

You can check out the full source code for this sample on IBM Jazz Hub (git):

I basically just took the Watson QA Sample Application for Node.js and started playing around with it to see what I could do…

This demo uses the Watson For Healthcare data set, which contains information from HealthFinder.gov, the CDC, the National Heart, Lung, and Blood Institute, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Neurological Disorders and Stroke, and Cancer.gov.  Just know that this is a beta service/data set – implementing Watson for your own enterprise solutions requires system training and algorithm development for Watson to be able to understand your data.

Using Watson with this dataset, you can ask conditional questions, like:

  • What is X?
  • What causes X?
  • What is the treatment for X?
  • What are the symptoms of X?
  • Am I at risk of X?

Procedure questions, like:

  • What should I expect before X?
  • What should I expect after X?

General health questions, like:

  • What are the benefits of taking aspirin daily?
  • Why do I need to get shots?
  • How do I know if I have food poisoning?

Or, action-related questions, like:

  • How can I quit smoking?
  • What should I do if my child is obese?
  • What can I do to get more calcium?

Watson services are exposed through a RESTful API, and can easily be integrated into an existing application.  For example, here’s a snippet demonstrating how you can consume the Watson QA service inside of a Node.js app:

[js]// url and https are Node.js core modules; service_url, auth, extend, req, and res
// come from the surrounding Express route in the sample application.
var url = require('url');
var https = require('https');

var parts = url.parse(service_url + '/v1/question/healthcare');
var options = {
    host: parts.hostname,
    port: parts.port,
    path: parts.pathname,
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json',
        'X-synctimeout': '30',
        'Authorization': auth
    }
};

// Create a request to POST to Watson
var watson_req = https.request(options, function(result) {
    result.setEncoding('utf-8');
    var response_string = '';

    result.on('data', function(chunk) {
        response_string += chunk;
    });

    result.on('end', function() {
        var answers = JSON.parse(response_string)[0];
        var response = extend({ 'answers': answers }, req.body);
        return res.render('response', response);
    });
});[/js]
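The snippet above builds the request but doesn’t show the question itself being sent. Here’s a rough sketch of how the body might be written and the request finished – the payload shape and the questionText field are assumptions, so adjust them to match your own route and the QA service documentation:

[js]// Hypothetical sketch: write the question body and finish the request.
watson_req.on('error', function(e) {
    console.log('Watson request error: ' + e.message);
});

watson_req.write(JSON.stringify({
    question: {
        questionText: req.body.questionText // the question typed (or spoken) by the user
    }
}));
watson_req.end();[/js]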

Hooking into the Web Speech API is just as easy (assuming you’re using a browser that implements the Web Speech API – I built this demo using Chrome on OS X). On the client side, you just need to create a SpeechRecognition instance, and add the appropriate event handlers.

[js]var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onstart = function() { … };
recognition.onresult = function(event) {

    var result = event.results[event.results.length - 1];
    var transcript = result[0].transcript;

    // then do something with the transcript
    search( transcript );
};
recognition.onerror = function(event) { … };
recognition.onend = function() { … };[/js]
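The onresult handler above hands the transcript off to a search() function that isn’t shown. Here’s a minimal sketch of what it might look like – the “/ask” route, the payload shape, and the answers array in the response are all placeholders, so adjust them to your own server code:

[js]// Hypothetical sketch of the search() helper referenced above.
function search( transcript ) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/ask', true); // placeholder route on the Node.js app
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function() {
        var data = JSON.parse(xhr.responseText);
        // hand the first answer off to speech synthesis (see the speak() sketch below)
        speak( data.answers[0].text );
    };
    xhr.send(JSON.stringify({ questionText: transcript }));
}[/js]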

To make your app talk back to you (synthesize speech), you just need to create a new SpeechSynthesisUtterance object, and pass it into the window.speechSynthesis.speak() function. You can add event listeners to handle speech events, if needed.

[js]// tokens[i] is one chunk of the answer text to be spoken
var msg = new SpeechSynthesisUtterance( tokens[i] );

msg.onstart = function (event) {
    console.log('started speaking');
};

msg.onend = function (event) {
    console.log('stopped speaking');
};

window.speechSynthesis.speak(msg);[/js]
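Notice the tokens[i] reference above – the answer is spoken in chunks rather than as one long utterance. Here’s one possible speak() helper that splits the response into sentence-sized pieces before queuing them up (the splitting logic is just an illustration; very long utterances can be unreliable in some browsers):

[js]// Hypothetical sketch: break a long answer into sentences and queue each one.
function speak( answerText ) {
    var tokens = answerText.match( /[^\.!\?]+[\.!\?]*/g ) || [ answerText ];
    for (var i = 0; i < tokens.length; i++) {
        var msg = new SpeechSynthesisUtterance( tokens[i] );
        window.speechSynthesis.speak( msg ); // utterances are queued and spoken in order
    }
}[/js]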

Check out these articles on HTML5Rocks.com for more detail on Speech Recognition and Speech Synthesis.

Here are those links again…

You can get started with Watson services for Bluemix at https://console.ng.bluemix.net/#/store/cloudOEPaneId=store

So, What is IBM MobileFirst?

I’m still “the new guy” on the MobileFirst team here at IBM, and right away I’ve been asked by peers outside of IBM: “So, what exactly is MobileFirst/Worklight?  Is it just for hybrid apps?”

In this post I’ll try to shed some light on IBM MobileFirst, and for starters, it is a lot more than just hybrid apps.


IBM MobileFirst Platform is a suite of products that enable you to efficiently build and deliver mobile applications for your enterprise, and is composed of three parts:

IBM MobileFirst Platform Foundation

IBM MobileFirst Platform Foundation (formerly known as Worklight Foundation) is a platform for building mobile applications for the enterprise.  It is a suite of tools and services available either on-premise or in the cloud, which enable you to rapidly build, administer, and monitor secure applications.

The MobileFirst Platform Foundation consists of:

  1. MobileFirst Server – the middleware tier that provides a gateway between back-end systems and services and the mobile client applications.  The server enables application authentication, data endpoints/services, data optimization and transformation, push notification management (streamlined API for all platforms), consolidated logging, and app/services analytics. For development purposes, the MobileFirst server is available as either part of the MobileFirst Studio (discussed below), or as command line tools.

  2. MobileFirst API – both client and server-side APIs for developing and managing your enterprise mobile applications.
    • The server-side API enables you to expose data adapters to your mobile applications – these adapters could be consuming data from SQL databases, REST or SOAP Services, or JMS data sources. The Server side API also provides a built-in security framework, unified push notifications (across multiple platforms), and data translation/transformation services. You can leverage the server-side API in JavaScript, or dig deeper and use the Java implementation.
    • The client-side API is available for native iOS (Objective-C), native Android (Java), J2ME, native Windows Phone (C#), and JavaScript for cross-platform hybrid OR mobile-web applications. For the native implementations, this includes user authentication, encrypted storage, push notifications, logging, geo-notifications, data access, and more.  For hybrid applications, it includes everything from the native API, plus cross-platform native UI components and platform-specific application skinning.  With the hybrid development approach, you can even push updates to your applications that are live, out on devices, without having to push an update through an app store.  Does the hybrid approach leverage Apache Cordova?  YES.

  3. MobileFirst Studio – an optional all-inclusive development environment for developing enterprise apps on the MobileFirst platform.  This is based on the Eclipse platform, and includes an integrated server, development environment, facilities to create and test all data adapters/services, a browser-based hybrid app simulator, and the ability to generate platform-specific applications for deployment.  However, using the studio is not required! Try to convince a native iOS (Xcode) developer that they have to use Eclipse, and tell me how that goes for you… 🙂  If you don’t want to use the all-inclusive studio, no problem.  You can use the command line tools (CLI).  The CLI provides a command line interface for managing the MobileFirst server, creating data adapters, creating the encrypted JSON store, and more.

  4. MobileFirst Console – the console provides a dashboard and management portal for everything happening within your MobileFirst applications.  You can view which APIs and adapters have been deployed, set app notifications, manage or disable your apps, report on connected devices and platforms, monitor push notifications, view analytics information for all services and adapters exposed through the MobileFirst server, and manage remote collection of client app logs.  All together, an extremely powerful set of features for monitoring and managing your applications.

  5. MobileFirst Application Center – a tool to make sharing mobile apps easier within an organization.  Basically, it’s an app store for your enterprise.

MobileFirst Platform Application Scanning

MobileFirst Platform Application Scanning is a set of tools that can scan your JavaScript, HTML, Objective-C, or Java code for security vulnerabilities and coding best practices.  Think of it as a security layer in your software development lifecycle.


MobileFirst Quality Assurance

MobileFirst Quality Assurance is a set of tools and features to help provide quality assurance to your mobile applications.  It includes automated crash analytics, user feedback and sentiment analysis, in-app bug reporting, over-the-air build distribution to testers, test/bug prioritization, and more.


So, is MobileFirst/Worklight just for hybrid (HTML/JS) apps? You tell me… if you need clarification or more information, please re-read this post and follow all the links.  😉

 

IBM Worklight Powered Native Objective-C iOS Apps

IBM MobileFirst Foundation (also known as IBM Worklight) is a middleware solution for developing mobile applications. Out of the box, Worklight enables security and authentication, device management, encrypted storage, operational analytics, simple cross platform push notifications, remote logging, data access, and more…

Historically, most people think that Worklight is just for creating hybrid mobile (HTML-powered) applications. While this is one of the powerful workflows that Worklight enables, it’s also the proverbial “tip of the iceberg”. Worklight does a lot more than just provide the foundation for secure hybrid applications. Worklight also provides a secure foundation for native applications, and can be easily incorporated into any existing (or new) native app.

In this post, I’ve put together a video series to take a look at just how easy it is to setup Worklight in an existing iOS native application to provide remote log collection, application management, and more. Read on for complete detail, or check out the complete multi-video playlist here.

The existing app that I’ll be integrating is based on an open source Hacker News client, which I downloaded from GitHub. Check out the video below for a quick introduction to what we’ll be building.

If you want, you can leverage the Worklight IDE – a set of developer tools built on top of the Eclipse development environment. If you don’t want to use Eclipse, that’s OK too – Worklight can be installed and configured using the Worklight command line interface (CLI), and you can leverage any developer tools that you’d like to build your applications. Want to tie into Xcode? No problem. I’ll be using the Worklight CLI to set up the development environment in this series.

Setting up the Worklight Server

The Worklight server is the backbone for providing Worklight features. App management, remote logging, operational & network analytics, and all of the other Worklight features require the server, so the first thing that you need to do is set up the server for your environment. Check out the next video, which will guide you through setting up a Worklight server/project using the CLI.

First things first: if you haven’t already configured a Worklight server, run the create-server command to perform the initial setup – this only ever has to be run once.

[shell]wl create-server[/shell]

Now that the server is set up, we need to create a Worklight project. For this walkthrough, I’ll just call it “MyWorklightServer”:

[shell]wl create MyWorklightServer[/shell]

Next, go into the newly created project directory and start it:

[shell]cd MyWorklightServer
wl start[/shell]

Once the server is started, add the iOS platform:

[shell]wl add api[/shell]

You will be prompted for an API name. This can be anything, but you should probably give it a meaningful name that identifies what the API will be used for. In this walkthrough I specify the name “MyiOSNativeAPI”.


Next, you will be prompted to select a platform; select "iOS".
Then build the project and deploy it to the server:

[shell]wl build
wl deploy[/shell]

Next, launch the Worklight console to verify that the project has been deployed and the native API has been created. The console will launch in the system web browser.

[shell]wl console[/shell]

Worklight Console

 

Be sure to check out the Worklight CLI documentation for complete detail on the CLI commands.

Xcode Integration

Next we need to set up the Xcode project to connect to the newly created Worklight server. If you’re adding Worklight to a new Xcode project, or an existing Xcode project, the preparation steps are the same:

  1. Add Worklight files to your Xcode project
  2. Add framework dependencies
  3. Add the -ObjC linker flag

This next video walks through configuration of your Xcode project and connecting to the Worklight server (which we will cover next):

In the Xcode project navigator, create a folder/group for the Worklight files, then right click or CTRL-click and select “Add Files to {your project}”…


Next, navigate to the newly created MyiOSNativeAPI folder and select the worklight.plist file and WorklightAPI folder, and click the “Add” button.


Once we’ve added the files to our Xcode project, we need to link the required framework/library dependencies:

  • CoreData.framework
  • CoreLocation.framework
  • MobileCoreServices.framework
  • Security.framework
  • SystemConfiguration.framework
  • libstdc++.6.dylib
  • libz.dylib

Next, in the Xcode project’s Build Settings, search for “Other Linker Flags” and add the following linker flag: “-ObjC”.


If you’d like additional detail, don’t miss this tutorial/starter app by Carlos Santana: https://github.com/csantanapr/wl-starter-ios-app

Connecting to the Worklight Server

Connecting to the Worklight server is just a few simple lines of code. You only have to implement the WLDelegate protocol, and call wlConnectWithDelegate, and the Worklight API handles the rest of the connection process. Check out the video below to walk through this process:

Implement the WLDelegate protocol:

[objc]//in the header, adopt the WLDelegate protocol
@interface MAMAppDelegate : UIResponder <UIApplicationDelegate, WLDelegate>

@end

//in the implementation, add the protocol methods
-(void)onSuccess:(WLResponse *)response {
    //handle a successful connection
}
-(void)onFailure:(WLFailResponse *)response {
    //handle a failed connection
}[/objc]

Connect to the Worklight Server:

[objc][[WLClient sharedInstance] wlConnectWithDelegate: self];[/objc]

Next, go ahead and launch the app in the iOS Simulator.

You’re now connected to the Worklight server! At this point, you could leverage app management and analytics through the Worklight Console, or start introducing the OCLogger class to capture client side logging on the server.

App Administration via Worklight Console

 

Worklight Analytics Dashboard

 

Remote Collection of Client Logs

Once you’re connected to Worklight, you can start taking advantage of any features of the client or server side APIs. In this next video, we’ll walk through the process of adding remote collection of client app logs, which could be used for app instrumentation, or for debugging issues on remote devices.

On the server, you’ll need to add log profiles to enable the capture of information from the client machines.

Adding Log Profiles

On the client-side, we just need to use the OCLogger class to make logging statements. These statements will be output in the Xcode console for local debugging purposes. If a log profile has been configured on the server, these statements will also be sent to the Worklight server.

[objc]OCLogger *logger = [OCLogger getInstanceWithPackage:@"MyAppPackageName"];
[OCLogger setCapture:YES];
[OCLogger setAutoSendLogs:YES];

//now log something
[logger log:@"worklight connection success"];
[logger error:@"worklight connection failed %@", response.description];
[/objc]

For complete reference on client side logging, be sure to review the Client-side log capture API documentation.

App Management & Administration

Out of the box, Worklight also provides for hassle-free (and code-free) app management. This enables you to set notifications for Worklight client apps and disable apps (cutting off access to services and providing update URLs, etc..). This next video walks you through the basics of app management.

Be sure to check out the complete documentation for app management for complete details.

All done, right?

At this point, we’ve now successfully implemented Worklight server integration into a native iOS app. We have remote collection of client-side logs, we can leverage app management, and we can collect operational analytics (including platforms, active devices, and much more).

If you don’t want to leverage any more Worklight features, then by all means, ship it! However, Worklight still has a LOT more to offer.

Exposing Data Through Adapters

Worklight also has the ability to create adapters to expose your data to mobile clients. Adapters are written in JavaScript and run on the server. They help you speed up development, enhance security, transform data serialization formats, and more.

In the next two videos we will walk through the process of collecting information inside the native iOS application and pushing that into a Cloudant NoSQL database (hosted via IBM Bluemix services).

Cloudant databases have a REST API of their own, so why use a Worklight Adapter? For starters, Worklight becomes your mobile gateway.  By funneling requests through Worklight, you are able to capture analytic data for every server invocation, and Worklight gives you the ability to control access to the enterprise.  You can cut off mobile access to the backend at any time, just by changing the API status.

Now let’s take a look at the process for exposing the Cloudant database through a Worklight Adapter:

Once the data adapter has been configured, it is simple to invoke the adapter procedures to get data into or out of your applications.
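For context, here’s a rough sketch of what the adapter’s server-side JavaScript implementation might look like. The adapter name, database path, and field names are placeholders – the actual backend connection is defined in the adapter’s XML descriptor, and the video above covers the real configuration:

[js]// Hypothetical sketch of CloudantAdapter-impl.js - names and paths are placeholders.
function putData(userProfile) {
    var input = {
        method: 'post',
        returnedContentType: 'json',
        path: '/userprofiles', // target Cloudant database (assumption)
        headers: { 'Content-Type': 'application/json' },
        body: {
            contentType: 'application/json',
            content: JSON.stringify(userProfile)
        }
    };

    // WL.Server.invokeHttp performs the HTTP call against the backend host
    // defined in the adapter's XML descriptor and returns the parsed response.
    return WL.Server.invokeHttp(input);
}[/js]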

This next video covers the process of pushing data into the Cloudant database adapter from the native mobile client application:

Once again, you will have to implement the WLDelegate protocol to handle success/error conditions, and the procedure invocation is implemented using the WLClient invokeProcedure method:

[objc]NSMutableDictionary *userProfile = [[NSMutableDictionary alloc] init];
[userProfile setValue:self.email.text forKey:@"email"];
[userProfile setValue:self.firstName.text forKey:@"firstName"];
[userProfile setValue:self.lastName.text forKey:@"lastName"];

WLProcedureInvocationData * invocationData = [[WLProcedureInvocationData alloc] initWithAdapterName:@"CloudantAdapter" procedureName:@"putData"];
invocationData.parameters = @[userProfile];

[[OCLogger getInstanceWithPackage:@"UserProfile"] log:@"sending data to server"];
[[WLClient sharedInstance] invokeProcedure:invocationData withDelegate:self];[/objc]

It is as simple as that.

IBM MobileFirst Foundation (aka Worklight) is for more than just hybrid app development. It is a secure platform to streamline your enterprise application development processes, including everything from encryption, security and authentication, to operational analytics and logging, to push notifications, and much more, regardless of whether you’re targeting hybrid app development paradigms, or native iOS, native Android, or native Windows Phone projects.

Embarking Upon A New Adventure

I’m excited to finally announce that I have embarked upon a new adventure!  Today marks my first day as a MobileFirst Developer Advocate for IBM!


So, what does that mean?

It is very similar to what I was doing back at Adobe focusing on developers.  I’m excited to engage with the development community around building mobile apps and leveraging cloud services to meet critical business needs.  I’ll be focused on IBM’s MobileFirst platform, including Worklight – a platform for building and delivering mobile applications leveraging Apache Cordova (PhoneGap), and Bluemix – IBM’s scalable cloud computing platform, which can be used for everything from hosted services, “big data”, security, back-ends for mobile apps, Java, node.js, ruby, and much, much more… Seriously, check out everything that Bluemix has to offer.

It is my mission to help you, the developer or business decision maker, be successful, and now I have access to IBM’s tools, knowledge and services to back me up!

Will I still be building apps and services?

  • YES – stay tuned for more info

Will I still be helping you build apps, and writing about development tools, paradigms, and best practices?

  • YES – it’s my mission to help you make the right decisions and be successful

Will I see you at the next development conference, hackathon, or meetup?

  • YES, and I can’t wait to show you everything IBM has to offer.  

Will I still be flying drones?

  • Of course! However, I won’t be blogging about drones and creative tools quite so much. Follow me on Flickr to see images from my latest flights, and feel free to ask me questions.

I had a great run with Adobe, and am thankful for all of the opportunities while there.  I worked on many amazing projects, worked with a lot of great (and very, very smart) people, and was able to continually push the envelope on both the development and creative/media sides, for which I am grateful.

Now, let the next adventure commence!


Lens Correction For GoPro Video Footage Has Never Been Easier!

I love my GoPro camera.  It takes amazing pictures and captures incredible videos, and can get into some extreme situations that other cameras probably would not survive – no wonder it is one of the best-selling cameras in the world.  I also love the fisheye lens, but there are times when the fisheye effect is too much. We’ve had lens correction in Photoshop and Lightroom and optics compensation in After Effects for a while, but now it is easier than ever to non-destructively remove the fisheye effect from GoPro video footage directly inside of Adobe Premiere Pro.  Check out the video below to see it in action.

Applying lens correction (or lens distortion removal) is incredibly easy.  There are new effects presets in the effects panel that enable video editors to simply drag an effect onto their clip to have the lens correction applied.  Just select the preset for the resolution and field of view (FOV) that match what you used to capture your footage, and drag it right onto your clip.  They are located under Presets -> Lens Distortion Removal -> GoPro. For fellow quadcopter enthusiasts, you may also notice some presets for the DJI Vision cameras!

GoPro Lens Distortion Presets

Once you’ve applied the preset to your footage, you can tweak it as you like to customize the amount of correction.  You can under-correct, over-correct, or change the center/focal point of the correction.  I normally tend to leave it with the default settings…

GoPro Lens Distortion Effect Controls

Once you’ve applied the correct preset for your footage, you’ll be able to see that the lens distortion has been removed.  The straight lines will now appear straight, and everything will line up to scale.

Lens Distortion Removal in Action

Now get out there and go capture some amazing footage of your own!

Salisbury Festival Time-lapse

Every year the town I live in has a weekend-long spring festival. There are rides for the kids, live music, beer, and lots of food. This year I have a great view overlooking the carnival area, so I decided to do a time-lapse video capturing all of the activity. The trucks pulled in before I got to the office on Thursday morning, but I managed to capture most of the set up, all the way until the trucks drove away on Sunday night.

I set up two GoPro cameras. One was a stock GoPro Hero 3+ Black edition capturing 7MP narrow FOV stills every 60 seconds. The other was a GoPro Hero 3 Black with a “flat” lens capturing 5MP stills every 60 seconds. Unfortunately the 3+ stopped recording after about 24 hours – I’m not sure if the camera overheated, had a bug in the firmware (I realized I’m one version back from the latest), or if my memory card had a corrupt sector. The image sequence for Thursday is from this camera. The backup camera kept running all 4 days and captured the entire festival.

Assembling this was simple – I imported the images as image sequences in Adobe Premiere, arranged them on the timeline, cut out the night sequences (there was almost no activity during them), added some transitions, titles, and color correction (contrast and saturation), then added some background music.  I added slow zooming and panning to each of the shots to add drama, which helped make things a lot more interesting.

Assembling A Panorama with Adobe Photoshop CC

I just put together a walkthrough of creating an aerial panorama from images captured with a GoPro camera and an RC helicopter over on Behance.  Check it out for full details.

Here’s the final panorama to whet your appetite:

Aerial Panorama

High-res here: https://www.flickr.com/photos/andytrice/13899593912/

Enjoy!

3D Printing A Custom Belt Buckle With Adobe Photoshop

Ever since Photoshop introduced 3D printer support I’ve been hooked on 3D printing. Some of my recent experiments included text extrusions, phone cases, and a dragon. While these are cool, I still wanted to kick it up a notch or two. Those models were all printed using plastic materials, and I’ve been wanting to try out a few other materials, in particular metal. So in honor of Photoshop World coming up next week, I decided to design and print a Photoshop themed belt buckle in raw stainless steel, and it turned out AMAZING! Far better than I had hoped.

I chose stainless steel because I wanted a minimalistic/industrial look for the belt buckle. The process was actually quite easy. I took a vector Photoshop logo in Illustrator, copied the PS and the square border, and pasted it into Photoshop. Then I extruded it into a 3D object, added a flattened cube on the back, and used cylinder and sphere primitives to create the belt loop and pin. Seriously, this was the complete process. I’m not over-simplifying things. Check out the video below to see a timelapse recreating this model entirely in Photoshop.

When you are creating 3D models, just be sure to pay attention to the object’s physical dimensions in the 3D Properties Coordinates panel.  Set the units to a physical unit of measurement (I chose Centimeters), and create your objects using the exact physical print size.  Also, it’s important to know the physical characteristics and limitations of your target materials, including minimum wall thickness, minimum wire thickness, embossing depth, clearance, etc… Design within these parameters for best results.

3D Properties – Coordinates

Once you’re ready to print, just select your print target and material in the 3D Print settings panel. Then send it to print using the 3D->3D Print… menu.  If you’re using the Shapeways 3D printing service you’ll then be redirected to upload your model and complete the print order.

3D Print Properties

A few days later, your print will arrive in your mailbox. However, do not forget to double check your print volume/dimensions after you’ve uploaded your STL to Shapeways. I noticed that some of the minimum thickness checks made the pin too thick, so I generated the STL file for a plastic material, then chose the metal material when actually ordering the print through Shapeways.

I’m really happy with how this turned out, and yes, I’m wearing this belt buckle right now.

3D Belt Buckle (1 of 2) 3D Belt Buckle (2 of 2)

All of this was created in its entirety using Creative Cloud. Join now if you haven’t already become a member! If you’re going to be at Photoshop World next week in Atlanta, stop by to check it out and learn more!

3D printing has the potential to completely change how people create physical objects. It enables faster prototyping and iteration in design and manufacturing, enables new forms of artistry and jewelry, and even has applications in medicine.  If you haven’t checked out 3D printing yet, you really should do yourself a favor and give it a few minutes of your time.

Aerial Photography with a GoPro Camera and Adobe Creative Cloud tools

My second article on aerial imaging with a remote controlled helicopter is now live in the March 2014 issue of Adobe Inspire!  The first article focused on aerial videography and Adobe video tools. This time it’s all about aerial photography with a GoPro camera and DJI Phantom (and how to bring these images to life with Photoshop and Lightroom).

You can read it on the web or download the FREE digital publication version to learn more. I HIGHLY recommend the digital publication version, which was created with Adobe Digital Publishing Suite.


Be warned (AGAIN) – flying helicopters with cameras attached is highly addictive. You may easily become obsessed with the endless possibilities.

If you want to learn more, definitely do not miss the Top Gun Flight Training For Hobbyist Photographers workshop at the upcoming Photoshop World conference next month!

Here are just a few panoramic images I’ve captured over the last year with my copter. You can check out even more in my Flickr collection.

San Francisco at Sunrise
Richmond, VA at Sunrise
The Las Vegas Strip

To learn more you can read the full article online or download the FREE digital publication, and don’t forget to become a member of Creative Cloud to take advantage of all the creative tools that Adobe has to offer.

Improving The Quality Of Your Video Compositions With Creative Cloud

I’ve been spending a lot of time with Adobe video tools lately… everything from videos for the blog, to promotional videos, to help/technical videos.  Here are a few topics that beginners in video production need to think about: color correction and audio processing.

First, you can make so-so video look great with a few simple color correction techniques. Second, a video is only as good as its audio, so you need solid audio to keep viewers engaged.  Hopefully this post helps you improve your videos with simple steps on both of these topics.

To give you an idea what I’m talking about, check out this before and after video. It’s the exact same clip played twice.  The first run through is just the raw video straight from the camera and mic.  Colors don’t “pop”, it’s a little grainy, and the audio is very quiet.  The second run through has color correction applied to enhance the visuals, and also has processed audio to enhance tone, increase volume, and clean up artifacts.

Let’s first look at color correction.  Below you can see a “before” and “after” still showing the effects of color correction.  The background is darker and has less grain, there is more contrast, and the colors are warmer.

Before and After – Color Correction

The visual treatment was achieved using two simple effects in Adobe Premiere Pro.  First I used the Fast Color Corrector to adjust the input levels.  By bringing up the black and gray input levels, the background became darker, and it reduced grain in the darker areas.  Then, I applied the “Warm Overall” Lumetri effect to make the video feel warmer – this enhances the reds to add warmth to the image.

Color Correction Effects in Adobe Premiere

You can enhance colors even further using color correction tools inside of Premiere Pro, or open the Premiere Pro project directly within SpeedGrade for fine tuning.

Next, let’s focus on audio…

You can get by with mediocre video if it has good audio, but nobody wants to sit through a nice-looking video with terrible audio. Here are three simple tips for Adobe Audition to help improve your audio, and hopefully keep viewers engaged.

In this case, I thought the audio was too quiet and could be difficult to understand.  My goal was to enhance audio volume and dynamics to make this easier to hear.

I first used Dynamics Processing to create a noise gate. This process removes quiet sounds from the audio, leaving us with the louder sounds, and generally cleaner audio.  You could also use Noise Reduction or the Sound Remover effects… the effect that works best will depend on your audio source.

Dynamics Processing (Noise Gate) in Adobe Audition

Next I used the 10-band graphic equalizer to enhance sounds in specific frequency ranges.  I brought up mid-range sounds to give more depth to the audio track.

10 Band EQ in Adobe Audition

Finally, I used the Multiband Compressor to enhance the dynamic range of the audio.  Quieter sounds were brought up and louder sounds were brought down to create more level audio that is easier to hear and understand.  However, be careful not to make your audio too loud when using the compressor!  If you’ve ever been watching TV and the advertisements practically blow out your eardrums, this is because of overly compressed audio.

Multi-band Compressor in Adobe Audition

Want to learn more?  Don’t miss the Creative Cloud Learn resources to learn more about all of the Creative Cloud tools – the learning resources are free for everyone! If you aren’t already a member, join Creative Cloud today to access all Adobe media production tools.