Tin Can API, previously and variously known as Project Tin Can, the Experience API, and Next Generation SCORM, is the successor to SCORM and AICC. It is such a departure in technical and design terms, though, that it has a brand new name. Tin Can API is the new content communication specification from ADL (the organization that owns the SCORM specifications), and AICC is adopting it as the basis for its own new specifications. (See Tin Can API Big News.) ADL reached out to Rustici Software to develop the standard. Rustici has a solid history of SCORM development experience, and they happily agreed to collaborate with ADL, which will still "own" the standard. I recommend the Rustici site for specifics and technical details. This blog provides a basic overview of the new standard.
The gist is that Tin Can API can be used to track anything, anywhere, anytime, and send it anywhere. Unlike SCORM and AICC, the API does not dictate what to track or precisely how to formulate what you're tracking. It's wide open. Also, the learning activity may originate anywhere. It does not need to be launched from an LMS.
Technically speaking, the API uses REST (the standard architectural style of the web, originally described by Roy Fielding) and JSON (a human-readable, open data-interchange standard). Calls are constructed as actor - verb - object statements, and they can be any actor, verb, and object. For example, "Lola" "read" a "Tin Can API blog". You decide what to track; the protocol does not dictate what is tracked or how to describe it. Contrast that with SCORM, which limited LMSs to tracking a small, predefined set of information.
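To make the actor - verb - object idea concrete, here is a rough Python sketch of what the "Lola read a Tin Can API blog" statement might look like as JSON. This is a simplified illustration, not a spec-complete statement: the exact property names and required fields are defined by the Tin Can API specification, and the email address and URIs below are placeholders I've made up for the example.

```python
import json

# A simplified Tin Can API statement: actor - verb - object.
# The identifying URIs and email address are illustrative placeholders.
statement = {
    "actor": {
        "name": "Lola",
        "mbox": "mailto:lola@example.com",  # hypothetical identifier
    },
    "verb": {
        "id": "http://example.com/verbs/read",  # placeholder verb URI
        "display": {"en-US": "read"},
    },
    "object": {
        "id": "http://example.com/blog/tin-can-api",  # the activity
        "definition": {"name": {"en-US": "Tin Can API blog"}},
    },
}

# Serialize to the JSON payload that would be delivered to an LRS.
payload = json.dumps(statement)
print(payload)
```

Because the grammar is this open, "Lola attended a conference" or "Lola repaired a treadmill" would be equally valid statements; nothing in the structure limits you to course-completion data.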
What's that about sending it anywhere? The information is sent to a Learning Record Store (LRS), or even to multiple LRSs simultaneously. The information in an LRS can be queried by another LRS, an LMS, or another reporting tool. To work with Tin Can API, LMSs can either work with an external LRS or build one into their system.
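Under the hood, delivering a statement to an LRS is just an HTTP POST of that JSON to the LRS's statements endpoint. Here is a hedged sketch using only the Python standard library. The endpoint URL is fictional, and a real LRS will also require authentication and any version headers the spec and your LRS call for, so the sketch builds the request but stops short of sending it.

```python
import json
import urllib.request

# Hypothetical LRS endpoint -- a placeholder, not a real service.
LRS_ENDPOINT = "https://lrs.example.com/statements"

statement = {
    "actor": {"name": "Lola", "mbox": "mailto:lola@example.com"},
    "verb": {"id": "http://example.com/verbs/read",
             "display": {"en-US": "read"}},
    "object": {"id": "http://example.com/blog/tin-can-api"},
}

# Build the POST request. We stop short of sending it because the
# endpoint above is fictional and a real LRS would demand credentials.
request = urllib.request.Request(
    LRS_ENDPOINT,
    data=json.dumps(statement).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Against a real, authenticated LRS you would then do something like:
# with urllib.request.urlopen(request) as response:
#     print(response.status)
print(request.get_method(), request.full_url)
```

Because the transport is plain HTTP and JSON, any system that can make a web request (a mobile app, a simulation, even a non-learning tool) can report activity to an LRS, which is what makes the "track anything, anywhere" claim plausible.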
Let’s say Tin Can gains enough traction that our favorite LMS decides to adopt the API. How will we know what information to send it so that it can create standardized reports? Conveniently, for those of us who need to do basic tracking like we currently do, there are plans to form "an agreed-upon starter list of verbs and activities". This should set a standard that lets us take care of those basic reporting needs without reinventing them.
What happens to SCORM and AICC? There will not be new versions of SCORM and AICC. They have come to the end of their road. But I think they'll be around for a while. People will not be fixing what's not broken - assuming it's not broken.
What is refreshing about this endeavor is that Rustici is putting a lot of effort into making it understandable, and they are very accessible: they've worked hard to describe the API clearly and make the information easy to find. There is more commitment to communication with the community than I've found in all the years I’ve worked with the ADL site. No disrespect to the ADL people; they just have different objectives. Already, I like the experience better.
If you read or listen to the Rustici folks, you'll notice a lot of excitement about potential. What I'd like to see is a standard that simplifies our lives in doing what we need to get done, while leaving the door open to anything you can find a use for or an interest in.
At the May 8 Intersect meeting, we had a meaningful, informative, and sometimes humorous discussion about Content Management System (CMS) selection and implementation.
Members from three organizations shared their experiences:
In addition, we reviewed the results of a survey that Amy Hoey of the Minneapolis Park and Recreation Board created as a part of their website redesign. We distributed the survey to the Intersect membership and had about 30 respondents. You may download the PDF of the survey findings here.
Thanks again to Laura, Chris, and MN DEED for hosting, and to everyone who participated. It was really an excellent, lively meeting.
Mark your calendars for our next meeting on Wednesday, August 14, 2013 from 2:00-4:00pm. See you then.
In April I had an opportunity to deliver a presentation to PACT in Minneapolis on the challenges of designing a good mobile user experience. As more of us use mobile devices for different activities (more than 50% of Americans have a smartphone, and 52.5 million of us have a tablet), interest in mobile learning (mLearning) continues to grow. But a successful mLearning experience has to be based on a good understanding of the constraints and opportunities that mobile devices present to interface designers and instructional designers. It will come as no surprise that designing for a smartphone is significantly more challenging than designing for a tablet. You can’t just convert or retrofit existing eLearning that has been designed for a PC so that it works well on a smartphone; you have to design specifically for that device. That means thinking differently about navigation, presentation, interactions, and content.
As a follow-up to my PACT talk, I decided to sit down with instructional designer Tony Tao, Fredrickson’s expert on rapid eLearning development tools. I wanted to find out more about how well these tools can work for developing effective mLearning experiences.
John: So Tony, let’s start with a broad question: what tools do instructional designers have at their disposal to create mLearning for use on smartphones?
John: Last year, you wrote a very informative series of blog entries comparing Articulate and Captivate. Do these tools have benefits and drawbacks with respect to developing mLearning that we should be aware of?
Tony: Articulate Studio and Captivate are the most popular rapid eLearning authoring tools on the market. When I wrote my blog entries a year ago, these tools were being used mainly for developing regular eLearning courses for desktops and laptops. There are some drawbacks in using them for mLearning. The most obvious problem is cross-platform support. Both tools publish courses in Flash (SWF), which iOS does not support, so you would not be able to view Flash-based courses on an iPad or iPhone. The good news is that Captivate 6 and Articulate Storyline already provide options for publishing a course in HTML5 as a way to address this cross-platform issue. Articulate Studio users, however, will have to wait for the release of Studio 13, which will become available later this year with the HTML5 option.
Another problem is with some of the user interface elements of the courses that you can develop with these tools. The default navigation buttons, such as Play and Pause, and Next and Back, are good on tablets. Storyline even released a player app for iOS, so that learners can navigate through courses comfortably with their iPad. However, these buttons are still not big enough for smartphones.
John: The screen size of smartphones definitely presents a design challenge. Navigation targets have to be big enough to accommodate fingers and thumbs in place of a pixel-specific pointer. But the more space you give away to navigation, the less space you have for content. Text size can also be a challenge, impeding the ease of reading.
Reading comprehension is another problem. A study by R.I. Singh and colleagues at the University of Alberta found that reading comprehension is reduced when people read from a mobile device, compared with a PC. One of the reasons for this is that the user has to remember more, because less of the text is visible at one time. This places an additional cognitive load on the user, adversely affecting comprehension. In Mobile Usability (2013), Jakob Nielsen describes this as “reading through a peephole.” This has to be a consideration for instructional designers.
So Tony, moving from authoring tools for mLearning to instructional design for mLearning, the question should be asked: in light of the interface design challenges, and the way in which most people tend to use their smartphones in short bursts for communication and connection, what types of learning experiences and learning content are most suitable for smartphones, compared with desktops and laptops? Is there a way to leverage the strengths of these devices in a learning context?
Tony: This is a great question. It’s very challenging to put as much content on the small screen of a smartphone as we do on the bigger screen of a PC. This dilemma can be addressed to some extent by technology. However, the solution is related more to content.
Most people who learn on a smartphone are looking for “instant answers.” This is different from the “formal” learning experience they have with desktops and laptops. An mLearning lesson running on a smartphone has to be short, maybe two to three minutes, and it has to go right to the point of what the learner needs. A good example is an instructional video. It delivers loads of content in a short period. At the same time, it uses the default navigation controls that come with the smartphone. So it saves some time that would otherwise have to be spent developing customized navigation buttons. Meanwhile, the “auto hide” feature of the default playback buttons frees more space for the content.
Another example is a customized learning app designed specifically for a smartphone. These apps can use the full controls on the touch screen, and offer the possibilities of learning through interactions.
John: I agree – a sweet spot for mLearning is just-in-time performance support. I think of the Geek Squad tech who came to our house last year to help me figure out why our TV and audio receiver were not synching. He needed to be reminded of the setting to change on our TV and found it on a video he watched on his phone.
Podcasts are another option – though they obviously don’t have the visual element. Something to bear in mind with video is the cost of data transfer, which users will have to pay for if they are using their own phone and are not using an available wi-fi network.
Going the app route, I think the challenge is to come up with something that people can dip in and out of rather than dive into for an extended time. For example, games, flash cards, or other types of quizzes would work well. Whether you go with a native app or web app would depend on the context.
Tony: You might want to check out a recent report from the eLearning Guild called “How Mobile Learning is Done.” It presents nine case studies from organizations around the world. Despite the design challenges, mLearning is only going to become more popular.
John: Agreed. Thanks very much for the info and insights, Tony!
In my first post on design thinking, I provided a description of how this term has been commonly understood over the last decade. In this post, I want to highlight some strengths and weaknesses of design thinking. So let’s start with four key strengths:
One of the criticisms that can be made (unfairly) against design thinking is that it assumes a few designers without any specialist domain knowledge can waltz in and fix some previously intractable problem in a complex area like, say, healthcare or social services or financial services. A valid question to ask these designers, especially if they ask apparently uninformed questions of the client team, is, “What the heck do you know about it, anyway?”
Here's Don Norman's response (from "Rethinking Design Thinking"):
What is a stupid question? It is one which questions the obvious. "Duh," thinks the audience, "this person is clueless." Well, guess what, the obvious is often not so obvious. Usually it refers to some common belief or practice that has been around for so long that it has not been questioned. Once questioned, people stammer to explain: sometimes they fail. It is by questioning the obvious that we make great progress. This is where breakthroughs come from. We need to question the obvious, to reformulate our beliefs, and to redefine existing solutions, approaches, and beliefs. That is design thinking.
Ignorance is one thing (more on that later), but a fresh perspective (perhaps from a different field altogether) is another, and that comes from a willingness to ask basic questions that everyone thinks they already know the answer to.
I can’t think of a single project I’ve worked on that involved observing and hearing from customers/users that did not yield valuable insights into how to make something better. And by better, I don’t just mean better for the user or customer. I mean better for the business or agency as well, because happier customers tend to buy more and cost less to support.
For example, the issues that are identified after user research into customer use of an existing product or service tend to fall into three main categories:
Involving customers in the design process does not always result in radical innovation. As Norman says, radical innovation is rare and not often successful when it does occur. However, observing and listening to customers is a relatively easy way to find paths to substantial improvements.
Creating rapid prototypes and then testing them as a way of determining a better solution – before investing all of the time and money required to build – is by no means a new approach, and its value seems to be well understood. Talk with anybody and they’ll say they get it. And yet, it’s not always (or maybe even usually) done. Why? Because of the apparent limitations of budget and schedule, which are often out of the control of a project team. A business manager has a need, which she conveys to the project lead, and says, “We need this next month for no more than [insert gross underestimate here].”
And of course that same business manager demands big improvements over whatever the current state is. But it’s very likely that those improvements are not going to be realized. Why? Because there was never enough time to ask questions, explore alternatives, and get user and stakeholder input/feedback. You can’t have the benefits of design thinking without, well, design thinking.
The “lean UX” approach described by Gothelf and Seiden in their book of that name, which marries the Agile methodology to design thinking, is a good way to deal with this dilemma. The bottom line for managers who fund projects and want to see real improvements is this: allow the time for questioning and exploration, and for prototyping to visualize alternative solutions. Don’t go into the project assuming the solution is already obvious and that the real goal is simply to implement it.
There’s nothing new in having multidisciplinary teams on design projects, even though they are not quite as common as they should be. The value is in having team members from different backgrounds with different strengths and skillsets (e.g., research and analysis, facilitation, lo-fi prototyping, visual design, developing/building, writing, project management, various types of domain expertise, and so on) work together to solve a problem or set of problems. Unfortunately, what I see often happen is that in the interest of keeping a tight budget one person is asked to wear more hats than is really appropriate given their skill set.
Key point: Unless your design team involves genuine collaboration among people with different skills and experiences, you’ll lose much of the value of design thinking.
Now, let’s turn to four potential weaknesses.
In my last post, I said that though I agree with the tenets of design thinking, the label itself is not very satisfying. It’s too fuzzy. And like so many other terms that those of us in the fields of user experience, service design, interaction design, information architecture, etc., use to describe what we do, it requires too much explanation to people who don’t work directly in these areas (such as those who can fund projects). I think the issues that some people have with design thinking result from their dissatisfaction with the label more than with the methods and techniques associated with it.
Design alone is a complex enough word, but the more problematic word in the label is thinking. If told that a project would involve design thinking, a decision-maker who doesn’t have endless leisure to read, ponder, and debate this topic might rightly wonder if anything would actually get delivered. The term alone doesn’t suggest knowledge or action. You wouldn’t know unless it was explained that the term describes both a mode of thought and a methodology. So in this way the term design thinking itself is a potential weakness.
This is the flip side of the fresh perspective coin. On the one hand, there’s power in questioning assumptions and conventional thinking. On the other, practitioners risk appearing uninformed when they’re not aware of what might have already been tried in the past, or of complex political or policy constraints when working on public sector projects, or of the financial impracticality of a proposed solution, or of the unintended consequences of a proposed solution.
The obvious response is for some designers to focus on particular domains, like healthcare, financial services, taxation, or transportation. But then some of the power of fresh perspectives and “stupid questions” may be lost. So there needs to be a balance: designers need to learn quickly and clients need to allow that learning, with the understanding that the fresh perspectives that come from outsiders will be well worthwhile.
This criticism has been made of consultants in a variety of contexts – that they appear, deliver their assessment and recommendations, and then leave. It must be admitted that for initiatives that aim to address complex problems, short-duration design thinking projects involving consultants may accomplish only so much.
An important part of a design thinking project should involve a plan by which the initiative is made sustainable beyond the formal involvement of the consultants. This will likely involve some teaching and knowledge sharing by the consultant team. A good example of this is described in a video about a service design thinking project for the Lewisham Public Housing Options Agency. Oliver King of London-based Engine Service Design has also made the point that his firm's engagements typically end with some type of training.
There’s a streak of idealism in some of the literature on design thinking and service design that is both engaging and maybe, just maybe, a little bit naïve. Not all challenges in social services, for example, have clear, feasible solutions. Practitioners of design thinking need to be careful that they do not present it as a panacea. Again, there needs to be a balance – in this case, between imaginative idealism and practicality. Not every project needs to result in radical innovation to be worthwhile. Something as modest as substantial improvements, which wouldn’t otherwise happen without design thinking, can still deliver increased sales and/or reduced customer support costs worth many times the cost of achieving them.
For me, the strengths of design thinking far outweigh the potential weaknesses. Still, those of us in the field need to be very aware of these potential weaknesses and work to counter any negative perceptions stemming from them. Our continuing challenge is to demonstrate the business value of design thinking in the face of often constricted budgets and narrow horizons.
Design thinking is a term that causes some people to cheer and others to express irritation, or, in the case of Don Norman, to do both at different times. In a 2010 article called “Design Thinking: A Useful Myth,” he wrote:
Design thinking is a powerful public relations term that changes the way in which design firms are viewed. Now all the mysterious, non-business oriented, strange ways by which many design firms like to work is imbued with the mystical aura of design thinking. Yeah, we do things differently than you do: that's why we are so powerful and unique.
But on March 13, 2013, in an article called “Rethinking Design Thinking,” Norman declared he had changed his mind, sort of:
I am here to say that I now have rethought my position. I still stand by the major points of the earlier essay, but I have changed the conclusion. As a result, the essay should really be titled: Design Thinking: An Essential Tool.
And in a new, revised edition of Norman’s classic The Design of Everyday Things, the chapter that was called “Human-Centered Design” in the earlier edition is now called “Design Thinking.” (The new edition will not be released until November 2013, but you can find the preface on his jnd.org website.) He explained his change of mind this way:
[T]he more I pondered the nature of design and reflected on my recent encounters with engineers, business people and others who blindly solved the problems they thought they were facing without question or further study, I realized that these people could benefit from a good dose of design thinking.
Even before I read Norman’s posts, I was both intrigued and dissatisfied by the design thinking label. But I’m coming to terms with it because I don’t think it’s going away anytime soon. For example, it’s described as one of the foundational elements of “Lean UX” in Jeff Gothelf and Josh Seiden’s recent book. It’s in Robert Curedale’s book on 200 ways to apply design thinking, and it appears frequently in the literature on service design (e.g., This is Service Design Thinking).
Concerns about the label aside, I agree with the tenets of design thinking – it is an approach that can be very valuable. At the same time, those practicing it should be aware of some potential weaknesses. But before I give my two cents on its strengths and weaknesses, it’s probably a good idea to describe what design thinking is. If you already know, then skip what follows, go on to the next post, and let me know what you think.
First, if you’re looking for a rigorous academic analysis of design thinking, see Lucy Kimbell’s two-part article in the journal Design and Culture (November 2011), called “Rethinking Design Thinking.” (It’s just coincidental that Don Norman’s later blog entry has the same title.) In Part 1, Kimbell explores the provenance of the term and notes that one of the earliest discussions of it is in Peter G. Rowe’s 1987 book Design Thinking. (Rowe is now a professor of architecture at Harvard.)
The way that design thinking has been understood and used most often over the last decade has been strongly influenced by Ideo founder David Kelley and Ideo CEO Tim Brown. In a 2009 profile in Wired, Kelley recounts a meeting with Brown in 2003 in which he had "an epiphany: They would stop calling Ideo's approach 'design' and start calling it 'design thinking.'" "I'm not a words person," Kelley says, "but in my life, it's the most powerful moment that words or labeling ever made. Because then it all made sense. Now I'm an expert at methodology [my emphasis] rather than a guy who designs a new chair or car." Ideo describes its design thinking process as –
…a system of overlapping spaces rather than a sequence of orderly steps. There are three spaces to keep in mind: inspiration, ideation, and implementation. Inspiration is the problem or opportunity that motivates the search for solutions. Ideation is the process of generating, developing, and testing ideas. Implementation is the path that leads from the project stage into people’s lives.
An even better source for understanding design thinking methodology (or so I think) is the British Design Council. Its Double Diamond Design Process Model is organized around four stages: Discover, Define, Develop, Deliver. The first diamond represents the exploration of divergent possibilities during the Discover phase, followed by a convergence on the definition of the problem to be solved. The second diamond represents the second period of divergence as alternative design solutions are prototyped and tested, ending in a second convergence at the point of an accepted solution. As is evident from this process of divergence and convergence, a key feature of the design thinking process is that it is highly iterative, beginning with exploration, questioning and insights about and from end users/customers. Norman underscores the importance of the “iterative and expansive” quality of design thinking:
Designers resist the temptation to jump immediately to a solution to the stated problem. Instead, they first spend time determining what the basic, fundamental (root) issue is that needs to be addressed. They don’t try to search for a solution until they have determined the real problem, and even then, instead of solving that problem, they stop to consider a wide range of solutions. Only then will they finally converge upon their proposal. This process is called “Design Thinking.”
Another key feature of design thinking is that it is human-centered as opposed to technology- or organization-centered. In other words, a goal of design thinking is to be strongly empathetic toward users/customers, working to make technology and organizations serve them rather than vice-versa. The design thinking methodology includes users and stakeholders in the design process, so that they can provide input and feedback at all stages. Being human-centered, however, does not mean being impractical: design solutions need to be technically and financially viable.
Design thinking is also highly collaborative and multi-disciplinary, involving various participants with different types of expertise and domain knowledge, depending on the specific objectives of the project. It is not, in other words, the exclusive property of professional designers.
You might be thinking, as I have, that this all sounds like user-centered design (UCD). The distinction between design thinking and UCD is not completely obvious because they share so much in common. They both involve –
So what’s different?
Design thinking harnesses the power of intuition. It is a process, evolved gradually by designers of all kinds, which can be applied to create solutions to problems. People of any background can use it, whether or not they think of themselves as designers. It uses the subconscious as well as the conscious mind, subjective as well as objective thinking, tacit knowledge as well as explicit knowledge, and embraces learning by doing.
So yes, you can identify differences between UCD and design thinking, though these are differences of degree more than kind. In the ongoing debates between lumpers and splitters, I’m usually more in the lumper camp: I see design thinking, service design, experience design, and empathic design all as children of human- or user-centered design – all closely related in goals, tools, and techniques. For me, the similarities are more significant than the differences. Regardless of specific labels, what’s most important to me, and what attracts many of us to these methods, is the potential to come up with at least incremental innovations to continue improving products, experiences, processes, environments, and organizations. In the next post, I’ll focus on the strengths of design thinking as well as some of its potential weaknesses.
Author’s Note: This blog entry is the third in a series I started to capture what I thought were important points from the April 2013 Fredrickson Roundtable for Learning Leaders meeting. The discussion topic of this Roundtable meeting was Building a Better Relationship with your LMS Vendor. Here's a link to my first entry and the second entry in this series.
The April 2013 Fredrickson Roundtable for Learning Leaders meeting topic addressed the question of how to build a successful relationship with your LMS vendor. The discussion around answering the question on common misconceptions or mistakes made in an LMS purchase or upgrade was quite fruitful. Here’s the list we compiled from that discussion. Notice that the list is affirming – meaning it's a list of the positive things to look for and do to be successful.
What other items belong here? Feel free to share your thoughts and ideas in the comments below.
Author’s Note: This blog entry is the second in a series I started to capture what I thought were important points from the April 2013 Fredrickson Roundtable for Learning Leaders meeting. The discussion topic of this Roundtable meeting was Building a Better Relationship with your LMS Vendor. Here's a link to my first entry in this series.
The April 2013 Fredrickson Roundtable for Learning Leaders meeting topic addressed the question of how to build a successful relationship with your LMS vendor. During the sharing of tips, both our vendor representative and our audience raised the question of support. Everyone agreed that defining and understanding the support to expect to receive from your LMS vendor was important. In discussion, the following list emerged as questions to ask your vendor to help you understand what they mean by support.
Questions to ask to probe the support question:
What happens after I sign?
Are there other questions that you’ve used when trying to understand the support question? Please share them in the comments below!
The April 2013 Fredrickson Roundtable for Learning Leaders meeting topic addressed the question of how learning professionals can build a successful relationship with their LMS vendor. The session was so packed with information and ideas worth summarizing, that it will take more than one blog entry to do it!
This blog concentrates on sharing the top tips, from both the vendor’s and client’s perspectives, for building and maintaining your relationship. Feel free to add to or comment on these lists below.
Tips for vendors in establishing and maintaining a good relationship with your LMS/TMS clients:
Tips for clients to establish and maintain a good relationship with an LMS vendor:
Emulating successful people and learning how to duplicate their methods and models can have big benefits. Sometimes, however, it's better to say no to learning how to do something yourself and say yes to the experts instead.
Indeed, we can learn by studying the attitudes and behaviors of successful people. We can be coached, read their books, attend classes, or learn from YouTube videos about everything from models for business change to PowerPoint presentations.
Not everything is worth taking time to learn, however.
Take, for example, replacing a walking belt on a treadmill. Fixing a walking belt that slips is so easy to do that apparently no one ever calls a technician. So when the walking belt on my treadmill started slipping a year ago, I couldn’t find anyone to fix it. I didn’t want to fix it myself, but even Google search couldn’t produce a website or local phone number for someone to call. To be sure, there are numerous websites and YouTube videos that demonstrate how to fix a walking belt that slips. I watched lots of them. But my effort to duplicate what I saw was ineffective.
On the verge of a decision to get rid of the treadmill and buy a gym membership, I decided to try one more time to find a technician who could help me. I called the manufacturer. They had a technician call us and make an appointment to come to our home. When he arrived, I showed him where the treadmill is located and then went back to doing my own work.
As it turned out, the treadmill needed two new belts. The technician installed them, and now I can use the treadmill again.
What the websites and videos don’t discuss is the “other” belt that often needs to be replaced when a walking belt is slipping. I’m happy that I don’t have to figure out how to do something I never wanted to do, and happy that when I want to walk, I can just go downstairs instead of to the gym.
Now, I also have more time to study things that do matter to me, like Presentation Zen and Learning in Adulthood and…well, you get the picture.
It's a common misconception that all teenagers are web savvy by virtue of being digital natives - that simply because of their age, they know how to use the web more effectively than older digital immigrants do. While it's certainly true that some teens are web super users, it's not the norm. According to a new study by the Nielsen Norman Group (NNG), "Teens are not as invincible as some people think. Although teens might feel confident online, they do make mistakes and often give up quickly. Fast-moving teens are also less cautious than adults and make snap judgments; these lead to lower success. Indeed, we measured a success rate of only 71% for teenage users compared to 83% for adults."
According to the study, teens perform worse for three reasons: insufficient reading skills, less sophisticated research strategies, and a dramatically lower patience level.
The biggest challenges teens faced in the NNG study were on "large sites with dense content and poor navigation schemes. Government, non-profit, and school sites were the biggest culprits of poor usability."
We have seen many, many adults struggle with exactly these issues, and so it's interesting to see that younger people struggle with them even more. If you have a site with a target audience that includes teenagers, here are some quick guidelines to bear in mind, courtesy of the NNG study:
It's important to emphasize that people under the age of 20 are not a monolithic group. There are differences in behavior between older and younger teens, as there are between teens and kids under 13. The NNG study is a great place to get some basic guidelines, but it's also wise to listen and observe as users in your target audience interact with your site.
Thanks to everyone who attended our webinar yesterday on service design in the public sector. We had a great turnout from various state agencies, the Metro Council, counties, and cities. If you want to review the presentation or share it with others, here’s a recording of our January 30 session. For those who couldn’t attend this session, we’ll be doing it again on February 13 at noon, Central time.
In our discussion, we recommended some resources that you might want to check out further. I’ve listed these, plus a few others, below:
“10 Ways that Design Thinking Can Save Government,” by Steve Ressler, GovTech.com, January 24, 2013.
I listed this resource first because if you are looking for a quick overview, this isn’t bad. The best part is the list of 10 “design thinking questions” to get you thinking about ways to improve service delivery in your organization.
This is Service Design Thinking: Basics, Tools, Cases by Marc Stickdorn, Jakob Schneider, and co-authors. John Wiley, 2011.
A very good overview of the subject that provides insight into the sheer scope and variety of service design projects.
Outside In: The Power of Putting Customers at the Center of Your Business, by Harley Manning and Kerry Bodine. Forrester Research, 2012.
Even though the term “service design” doesn’t make an appearance in this book, everything Manning and Bodine discuss is based on the principles and practices of service design. The examples in this book are particularly useful. I wondered if the authors owe their title to the Government of Canada, which in 1998 introduced a new "outside-in, citizen-centred approach to Government of Canada service delivery.”
The Customer Experience Revolution: How Companies like Apple, Amazon, and Starbucks have Changed Business Forever, by Jeoffrey Bean and Sean Van Tyne. Brigantine Media, 2012.
A short, clear, persuasive case for the business value of a systematic focus on customer experience.
The Service Design Network.
This is a European-based organization with members from around the world. Birgit Mager is a leader of the SDN and one of the founders of this relatively new field.
“The Road Ahead for Public Service Delivery: Delivering on the Customer Promise,” by the Public Sector Research Center of PricewaterhouseCoopers.
Well worth a read for its expansive look at the value of re-thinking service delivery in the public sector, with a focus on breaking down the hierarchies and vertical silos that can get in the way of a truly customer-centric approach.
Service Design Research
A good list of articles on designing services written between 1976 and 2008.
You also might want to check out the recordings of our two previous webinars:
Online Forms: Extreme Makeover
Self-Service Nation: Improving the user experience to maximize digital government efficiency and ROI
I pulled up to my garage one cold 11-degree morning and pressed the button on the garage door opener remote. Nothing happened. Pressed it again. Nothing. So began a just-in-time learning process for me.
After opening and closing the door manually, I called the Subject Matter Expert (SME) I always call when something needs fixing at my house: my Dad. He went out into his own garage, looked at his opener – a similar model to mine – and started describing the mechanisms to me and what I could do to potentially fix it over the phone, in great detail, and assuming I knew what the parts were called. As I struggled to understand, I asked questions like, “Do you mean the small piece of metal between the two nuts?” and, “Is that in the motor behind where the light bulb is?”
Later that day, I was at a client meeting discussing how we work successfully with SMEs, and my conversation with my Dad came back to me. I’ve had conversations like this garage-door-troubleshooting one with him many times, and the process I automatically go through with him to allow me – the novice – to understand him – the expert – applies to working with SMEs in any setting. Here are a few tips that came to mind for working successfully with Subject Matter Experts, whether you're designing training, receiving coaching, or are a Subject Matter Expert yourself.
What are your top tips for exchanging information with experts?
As Fredrickson’s Staffing and Recruiting specialist, one of my primary goals is to match our employees with appropriate opportunities within our clients’ organizations. I feel my job has been done well when our client is happy with the work of our resource and our resource feels positive about their work experience. However, many elements contribute to our clients’ “happiness,” and likewise, our resources will feel positive about a particular work experience for a variety of reasons. This three-part blog series will examine what makes for a successful staffing engagement from my perspective, from our clients’ perspective, and from the resource’s perspective.
Perspective #1: Staffing Specialist
After each engagement, we reflect upon and analyze the outcome. Did our employees accomplish what we sent them out to accomplish? Did they meet or exceed the client's expectations? The first element contributing to the success of a staffing engagement is having clear expectations set and met.
To set expectations clearly, we ask our client to consider:
Clear expectations allow us to choose precisely the resource best for you. Why bring a person in to work full-time when there’s only thirty hours of work to be done per week? Or, if the resource is able to accomplish your goal in thirty hours, are there other areas where they can assist you during those extra ten? It’s helpful to be prepared to use the resource’s time to your advantage.
Once our resource is onsite, success depends on clear direction throughout the engagement.
It’s also important to remember that interim workers are co-workers too. Interim workers have many of the same needs as your own employees. Providing the context and background for the work being done will allow for deeper understanding of their piece in the puzzle. It can be easy to forget that an interim worker will also benefit from the same “nuts and bolts” information that may seem common knowledge to you and your team. Time invested up front in showing the interim worker how to navigate the corporate campus, getting access to the right systems and IT resources, and so forth will pay dividends later in terms of efficiency.
We also appreciate it when our clients remember that constructive criticism is good feedback too! We understand that you're busy, but any information you provide about the performance of our resources is valuable. We want to keep doing what we’re doing right and eliminate any pain points.
In short, anticipating obstacles prior to the engagement helps ensure an effective and successful relationship between the resource and the client.
A while ago, I came across a blog post from Clive Shepherd titled, Do instructional designers need to know about what they are designing?
I was eager to dive into Clive’s blog and finally get some clarity on this issue. Instead, what I read was a classic pro-con argument for both positions, but with only “I’m undecided” for a conclusion. A bit deflated, I shared the article with our team, asking others to comment. What I got was a more satisfying answer, one that illustrates how we typically work with our clients. Here’s what one of our instructional designers posted:
“I think the strongest learning engagements happen when the subject matter expert and the instructional designer are really partners in the development of the project. If I as a designer feel comfortable learning from the SME, returning to them with questions, and remaining open to their suggestions, I think the odds of the project being successful rise dramatically. Partners who have mutual respect for each other’s expertise build very cool things.” -- Cim Kearns
Cim captures, for me, the essence of our approach to clients. We recognize the expertise you bring to the relationship. You know your business. You know your people. You’ve seen what works and what doesn’t. You understand what you know. And we meet you there, bringing to bear our expertise in teasing out the unknowns, asking questions, probing for ways to connect and partner on the solution that best addresses the business need.
Here’s an example of a recent project where our client brought their expertise and allowed us to fill in the gaps.
We worked with a client in the dental industry that was implementing a major system change. Other than visiting the dentist when needed, our team didn’t know much about this business. However, we do know how to design training, especially when a complete system change is involved. We understand that training under these circumstances is about more than just showing someone how to do something; it is about preparing the groundwork for the change and respecting people’s need to see the benefits of the change rather than fear it.
Our client got that part too, and had taken steps to begin the change process themselves. Instead of wasting time questioning their ability to handle this aspect, we respected what they’d learned from it, and avoided undermining it in what we designed. We asked our client to consult on the background of the system, and they trusted us to design a strategy and supporting content to assure learning retention and skill acquisition. We learned a lot about and with each other along the way. The result was not only a great success for the training, but a strong partnership and respect for each other.
So, in the end, I agree with Cim. When we partner with clients and respect their knowledge and expertise, meeting them at that point and along the way learning things we never would have known otherwise, magic can happen. Ultimately, then, I’m not undecided. I believe it doesn’t matter if we know about the subject we are designing. The question is—are we willing to learn, to partner with those that do, and to create great things together?
About ten years ago, I worked as the training manager of a small company made up of several customer service teams. This was before tools such as Articulate and Captivate had made the development of eLearning affordable for one- or two-person training departments, which meant that most of our learning took place in the classroom. As two trainers who recognized the inefficiencies of our classroom-based training curriculum and wanted to dip our toes into eLearning, we found this frustrating. When we were told that our customer service representatives would need to be trained on a new version of Microsoft Office, and that we would need to develop more classroom training, we just looked at each other and sighed. More time in the classroom meant less time trying to develop more efficient solutions to our training needs.
Our disappointment did not last. In working through the materials we were preparing to teach, we discovered that something called SharePoint was included in the new version of MS Office. After listening to our IT team describe SharePoint’s features, it became clear that it would provide a platform from which we could realize our grand technology dreams. We saw a bright future for our customer service teams where the laminator could be laid to rest: Gone were the days when team members had to put their customers on hold and ask team leads for answers to tricky questions. No longer would it be necessary for team members to create their own job aids. Our little training department could build intranet sites for each of the customer service teams without any special development skills.
I immediately began sorting through all the different chunks of information each team needed. Finally, I could help people transform information into knowledge and “manage” it. When my especially creative colleague told me he had discovered a superhero generator online, I was ecstatic. Not only were the team portals going to be useful, they were going to be cool. We decided to assign a different superhero to each of the teams with the thought that the superhero would provide the fun factor that would draw team members to their portals.
Then my colleague and I really got down to business. We gathered information. We sorted it. We had it reviewed by team managers to make sure it was accurate. And we, the dynamic duo, packaged all that information in a super heroic package so that it became the usable, readily accessible knowledge that would dazzle the teams. Finally, we didn’t just announce the completion of the team’s SharePoint portals, we distributed superhero playing cards to each of the teams and cleverly informed them that using the portal would turn them into superheroes.
My guess is that you know where I’m going with this tale. We built each team portal, and the team members did not come. Team members continued to yell over cubicle walls to find answers to questions while their customers waited on hold. Job aids continued to be made by individual team members and circulated among the teams. In spite of our efforts to show team members just how much easier it was to access information through the portal, and how much more reliable that information was, our superheroes had failed. The SharePoint portals were ignored.
Other projects needed to be completed, so we didn’t spend too much time wallowing in our disappointment. We comforted each other by insisting that team members were just afraid of change. They would eventually see the error of their ways. We just had to keep pushing them to the portals. And we were right. The teams did end up finding the portals to be very useful. But it had absolutely nothing to do with our super powers. The team members began to use the portals once they realized that they had the ability to load information onto the portals themselves. Once the teams took ownership of their portals, the portals became the useful, fun intranet sites that we had imagined we were building. Team members would volunteer to personalize each portal and tap into wells of creativity they had no idea existed. Managers would hold contests encouraging members to use the portals. The portals became a great success.
I’ve devoted much of the past ten years to eLearning instructional design and I’ve never forgotten the lesson I learned while watching my super heroes crash and burn. When I neglect to involve my end user in a project during development, its chances for success diminish. When I don’t test a project at various stages with my end user, success becomes even less likely. And if I don’t evaluate a project and tweak it according to feedback, then there is a good chance that time and money will be wasted.
A couple of years ago, I used SharePoint to help team members understand the features of a brand new system. Keeping in mind that ordinary citizens can be so much more powerful than super heroic trainers, I made sure to take advantage of the discussion forum feature in which team members were able to ask and answer questions. The team members were brilliant and shared their knowledge as well as their ignorance freely.
Next time that you are asked to play the role of a super hero and save ordinary citizens from their ignorance, remember my favorite mistake. Don’t assume that you have all the answers for a learning solution. Listen to what your learners have to say about their plight as well as your proposed solution and you’ll tap into super powers you never knew you had.
That might be the $50,000 question on the minds of instructional designers who create learning opportunities for people in corporate America. Seriously … we want people to love to learn, because the more they love learning, the more of it they will do. This attitude can only produce a win-win for any business entity we work with, and warm feelings of personal success for instructional designers.
Psychologists tell us that people are born with the desire to learn. Learning is how we make progress, reach our goals, challenge ourselves to new and exhilarating heights!
But wait … that’s not how many people experience learning. As I mentioned in my last post, the act of reading can be traumatic and a source of failure for people in the workplace. When people are afraid, the amygdala, part of the brain’s primitive, “reptilian” threat-response system, is activated. The amygdala is important for emotional learning and memory, but when we feel afraid or threatened it effectively shuts down the thinking brain and focuses instead on survival and safety. This is an instinctual response that can’t be easily overcome unless, of course, people feel safe.
This is not an overstatement of how it is for many people in the workforce. A friend of mine is teaching a class of sixteen people, who are being retrained for new jobs, how to solder joints. Family Handyman says soldering joints is easy to do with a little practice. Soldering electronic components isn’t easy, though, and if it really were as easy as Family Handyman says, we wouldn’t experience as many leaky water pipes and failed computer parts! Sixty percent of the people in that class failed a simple test. Why? Because they had to pass the test in order to keep their jobs. They were afraid.
What can we do with this information?
Aarron Walter, the lead user experience designer for MailChimp, says that when a user’s basic needs are met, they can begin to experience the pleasure associated with a task such as learning. The basics are an interface that’s functional, reliable, and usable. When those basic usability needs are met, and when course content is designed to enable learning rather than hinder it, the user can start to love learning.
Walter quotes molecular biologist John Medina:
Emotionally charged events persist much longer in our memories and are recalled with greater accuracy than neutral memories. The prefrontal cortex … the part of the brain that governs “executive functions” such as problem-solving, maintaining attention, and inhibiting emotional impulses … assists other parts of the brain, especially the amygdala … When the brain detects an emotionally charged event, the amygdala releases dopamine into the system. Because dopamine greatly aids memory and information processing, you could say the brain gets a chemical Post-It note on a given piece of information. It is what every teacher, parent, and ad executive wants.
Designing an interface and course content with positive emotional stimuli—a pleasurable experience—builds engagement with users. To engage an audience, we must let our personality show through our work; we must give people something to relate to.
At the November 13 Intersect meeting, more than 60 of us (a record!) gathered to talk about website and intranet content strategy. There are many challenges and intricacies around getting a content strategy in place, but in the long run it sure can make the day-to-day work of IT and communications staff easier. Plus there is great benefit in the content creators throughout the organization knowing their freedoms and boundaries.
We had members from two organizations facilitate our discussion:
Please note that the content of any presentation provided here is the property of its author(s) and may only be reused with permission.
More resources (thanks to Lucinda Plaisance at the Metropolitan Council):
Thanks again to Drew VanKrevelen and the Minnesota Lottery for hosting, to our facilitators, and to everyone who participated. It was an excellent meeting.
Mark your calendars for our next meeting on Tuesday, February 12, 2013 from 2:00-4:00pm. See you then.
Translation. It’s what everyone wants to talk about. When I was doing the conference circuit, about 25% of the questions I was asked were about translation. And that’s not the topic I was there to present! There are more and more companies popping up whose business model is solely based on translation and foreign language audio recording (all saying they specialize in preparing audio for eLearning). And, even today, I received an email from Articulate talking about the ease of translation in Storyline.
It’s easy to make translation look easy. After looking into Storyline’s translation ability, I agree; they have a nice system in place to help translation along. But after working on several translation projects, I have learned, very clearly, that translation is a lot more than turning “yes” into “oui,” especially where eLearning is concerned.
There are layers to translation projects. Not all projects need all of these layers, but as you begin it is important to consider them all in relation to your project objectives to avoid surprises later and make sure you’re getting the outcome you truly want. It’s also important to recognize and plan for how these layers impact each other and how you will sew them together in your final product.
Here are the five layers I’ve defined so far. As I continue to work in this field, I have no doubt that I will continue to define and refine this list:
It’s easy to get stuck thinking of translation as just the Language Layer. But, I think, after looking at the layers above, you can start to see the importance of each and how they fit together.
In the end, my one and only message is simply this: translation projects are not always complicated, but they certainly require a lot more than just sending things to a translator. And with more and more translation needs arising, it’s important to educate ourselves and our clients (internal and/or external) about the entire scope of translation so that our projects can be smooth, accurate, and, ultimately, a resounding success.
As an instructional designer, I spend a good deal of time thinking about stories when working on a learning project. My primary goal during my early meetings with a client is to determine the overarching story that will structure the learning event. Then I gather as many anecdotes as possible from subject matter experts so that I can provide context for concepts, illustrate technical points and generally keep the learner engaged.
Listening to subject matter experts tell their stories has always been one of my favorite moments in the design process, not only because the tales from the front enrich and enliven a learning event, but also because the stories interest me and sometimes move me in unexpected ways. I’ve laughed out loud learning the creative ways employees have misinterpreted a policy, and been moved to tears listening to a nurse explain how she saved the life of a teenage girl through an impromptu training session in the middle of the night. But it was only recently that I realized how important the act of storytelling itself was to a project’s ultimate success. In telling their stories, the subject matter experts weren’t just providing me with useful material; they were bringing me into their world, showing me who they were, and thereby establishing connections.
In reflecting on the power of the subject matter experts’ stories, I came to realize that I didn’t just use stories to demonstrate a successful learning strategy or illustrate my expertise with a certain tool. Telling stories helped me establish connections as well. Because I don’t work directly with subject matter experts, I can’t draw from shared experiences to inspire trust. What I can do, however, is share stories to help my partners understand who I am and how I work. A story may not be worth 1000 hours spent in meetings, conference calls and side-by-sides, but it can help strengthen a working relationship.
My newfound recognition of a story’s power to build partnerships led me to think about the stories I’ve shared in the past with subject matter experts. Which stories seemed to break through barriers? And which ones created them? What I discovered was that the more honest a story, the more effective it was, even if it meant exposing mistakes or weaknesses. In fact, I realized that, more often than not, it was my willingness to show my vulnerability, rather than a demonstration of my strength, that enabled a connection with my partners. When I explained the reason for an instructional design choice by describing a mistake I made, the client didn’t run away. Instead, they listened more closely to what I had to say and ended up opening up to me as well.
Even though I acknowledge that honest stories can expose vulnerability and strengthen relationships, I don’t think I am brave enough to share my mistakes and flaws with clients on a regular basis. And I certainly don’t recommend exposing weaknesses as a business strategy. However, I believe that a willingness to share such stories when relevant and appropriate can and does strengthen partnerships. Since I know that strong partnerships result in successful learning solutions, I think exposing my own vulnerabilities can be worth the risk.
I love to read. My mom liked to tell the story that her dad taught me to read from Zane Grey novels before I started kindergarten. Even if she stretched the truth a bit, it’s a good story. Because reading has been important to me for so long, I was surprised to learn that reading is not something people do naturally.
It’s true that people are wired for language. Language is a normal, natural thing for humans to learn, and children can speak whatever language they’re exposed to with no special training. But we’re not wired to read or to write. Reading and writing require a huge mental effort over a long period of time; most of us received years of instruction and practice in elementary school.
With good training, many of us now read with ease and for pleasure. We’ve forgotten the struggles we had while learning to read, unless we’re around children who can remind us. But for people who didn’t acquire good reading skills, and for those who are learning English as a second language, reading poses substantial challenges. The struggle to read can be so overwhelming and taxing, in fact, that the brain’s capacity to comprehend is limited. This can have serious consequences for workplace performance and in people’s lives.
How do skilled readers recognize words? Educational researchers seeking the answer to that question in the 1970s studied two possible options. Some researchers thought “context” — recognition of whole words and phrases — was the most efficient way to read. (Speed-reading methods developed during the 1970s and ’80s, which trained people to read whole words and groups of words quickly, may be based on this approach.) Others believed good readers recognized “features” — the lines and contours used to form letters.
It turns out that we read more efficiently, and more importantly, that we remember more of what we read when we use “feature-driven” reading. When readers learn to recognize letter shapes and the rules for writing words, and commit these things to memory, reading does become automatic.
The majority of eLearning courses have words—sometimes, a lot of words. Instructional designers can minimize barriers and assist all readers, and especially readers who have low-reading skills, by following some usability guidelines.
Following usability guidelines can help to ensure that when people read text, their reading and learning experience is improved.
Dirksen, Julie. 2012. Design for How People Learn. Berkeley, CA: New Riders.
Johnson, Jeff. 2010. Designing with the Mind in Mind. New York, NY: Morgan Kaufmann Publishers.
As a learning professional, it is incredibly refreshing to take a step back from “doing” learning and instead be a participant in the process. Taking the time to be in the same place as others who have similar ideas, wants, needs and desires really enhances the learning experience. As I’ve been reflecting, processing and synthesizing the information overload from the Masie Learning 2012 Conference last week, I’ve started to draw some parallels and conclusions about trends in the learning profession. This post shares some of those ideas.
The first concept that I’ve synthesized from a variety of conversations centers around the idea of what it means to be a life-long learner. It was very interesting to note that General Colin Powell, Marshall Goldsmith and even Elliott Masie think about learning as something well beyond the boundaries of the “corporation” and as something that continues to drive and influence them at all stages of life. I think that as part of the work we do in our profession, we sometimes get wrapped up in how the organization we work for can drive our perception of both what and how we need to teach people. However, if we create a culture of learning, then the expectation that learning is part of who we are seems to have a greater chance of success, no matter the path in life we choose.
Another synthesized concept for me was the idea that “mobile” and “social” are just technologies that allow us to deliver personalization, presence and collaboration. To some extent, technology is just giving us more and varied ways to approach the age-old business problem of where and how we do training and how we get teams to better perform and communicate.
While this theme pervaded the Learning 2012 conference, it was driven home to me again just today when I saw Chris Laping, SVP of Business Transformation and CIO at Red Robin, talking at YamJam12 about the fact that we are still facing the same problems today that we were facing 40 years ago. His thesis was that we don’t work to solve problems; we just work to make problems go away. In other words, we don’t take the time to get to the root cause and address it; rather, we slap a Band-Aid on it so it stops bleeding and declare it fixed.
So the question becomes, in the rush to incorporate technology into the picture, are mobile and social just more Band-Aids that we are trying to apply, or can we be deliberate in a way that allows us to appropriately use the tools to address the root cause of the business issue we are trying to solve?
A third concept that struck home was the idea that there is no single solution for mobile, and that if you think you have the answer, just wait a day: something will happen to make you question your solution or redo already-finished work. There were overwhelming nods of agreement from most presenters and participants at the conference on this point. A sound approach to supporting mobile in the workplace is to take a step back, define the strategy and standards, and then decide how to move forward.
This approach to the mobile question became more solidified for me upon my return home. As part of a focus group discussion with one of our clients, the demand for a mobile answer was strong, but the team’s ability to speak to the strategy and the standards for adoption is still missing. Without those answers, moving forward with mobile at this time doesn’t make any sense.
Finally, throughout the general session presentations, I found myself most drawn to the stories presenters told, which truly made my learning an immersive experience, not just an event. Jenny Zhu spoke about the experience of teaching English in China and opened my eyes to the blindness of our cultural biases.
Ken Davenport gave me a nugget of insight to watch for the physical reaction of the audience in order to figure out what to pay attention to when creating learning, so that you can enhance the chances of making learning stick. And John Ryan touched my heart and reinforced a core belief of mine that every person offers us value when we take the time to really get to know and understand who they are.
This theme of making learning an experience, not just an event, really speaks to a trend I’ve been observing among our clients. Creating stand-alone learning solutions is often no longer the only answer. Clients are taking a step back and really looking at ways to wrap learning, thinking through all the elements to be delivered, so that learning has a greater chance of success. We see a trend toward solutions that address communication, learning, change management and performance support all together with an eye to greater impact and chance for success. And it’s a trend I applaud, as it gives our profession a way to offer value to learners that lasts well beyond the event, and maybe even beyond the “corporation”, putting us on that path to life-long learning.
If you attended the conference, I invite your comments and questions. If you didn’t attend, but this post sparks thoughts, ideas or questions, I welcome the opportunity to continue the conversation.
Designers are often asked to “solve a problem” that exists. These problems need to be fixed because people are unhappy about something that’s either happening or not happening—there’s a perceived need. It’s commonly understood in the business environment that “design” is a solution to a problem. That other kind of design activity—creating—is reserved for interior decorators and artists and is more easily recognized in reception areas than in the cubicles and offices where people do their work.
What does a problem look like?
Company X has a problem. Upset customers have lodged dozens of complaints about rude and unhelpful customer service representatives. The manager of this group of employees is convinced the problem can be solved by better training. She convinces her supervisor that the CSRs need training designed to solve their problem. The supervisor agrees to find a solution.
What does a solution look like?
Company X CSRs are put through a 2-day training program. They’re told that customer surveys will be used for 30 days following their training to collect data that will verify the effect of training.
What is the outcome?
Customer surveys do indicate that the problem—irate customers—is less intense. Complaints against the CSRs drop by 50 percent and the supervisor’s successful handling of the problem is recognized during a meeting the following month.
Then, within a matter of four months, the 50 percent drop in complaints disappears. What happened?
According to Robert Fritz, who worked with Peter Senge and then went on to found DMA and develop the Technologies for Creating curriculum, the following happened: "The problem led to action to solve the problem. The action lessened the problem. Less action was needed to solve the problem. Less attention was given to the problem, and the problem resurfaced. Problem solving," Fritz explains, "provides a way to organize our focus, actions, time, and thought process. Designing solutions to problems gives the sense that something important is being done."
He adds, “…it’s an illusion.”
What’s the alternative if designing solutions to problems doesn’t work?
Creating and problem solving involve very different states of mind. Creativity activates positive thoughts while problem solving is focused on what is negative. Creating is forward focused; it’s building toward the future. Problem solving is focused on the past; it’s resisting what has been.
What does creating look like?
Fritz has identified five steps in the creative process, which are types of action (not a formula). These steps are:
Notice that the word “problem” is not present in these five steps. The tone is positive and growth oriented.
What do you think could happen at Company X if customer complaints were approached from a creative rather than a problem-solving mindset?
Since its release in 2010, the Apple iPad has demonstrated its strength by capturing over 70% of the tablet computer market. During the fourth quarter of 2011 alone, Apple sold 15.4 million iPads. Most consumers use the iPad mainly for a combination of web surfing and online entertainment. In the business and commercial environment, the iPad is a partial replacement for a laptop, offering portability and ease of use. Of course, for those of us in the learning business, the iPad also offers a brand new and exciting delivery platform for eLearning.
I have received more and more requests from Fredrickson’s clients who are interested in delivering eLearning modules on the iPad. And we quickly had to come to terms with the major issue with the iPad as it relates to eLearning: you cannot run your shiny Flash-based courses on the iPad. See my previous blog entries for more on this.
Apple’s decision not to support Flash player leaves us with only a couple of solutions, both of which have many pros and cons.
The first option is to develop the courses as “apps” that are native to iPad, and then push them to the learner’s iPad through the App Store. This process is usually expensive, time consuming, and it may bring up security, content rights, and confidentiality issues.
The second option is to develop the courses using HTML5 and CSS, and then deliver the courses online, either through a secure URL or an LMS. Fortunately, there are some rapid development tools on the market that offer the capability of converting the existing Flash courses to HTML5 format. The best examples are the just-released Articulate Storyline and Adobe Converter, also known as HTML5 Converter for Adobe Captivate 5.5.
So how well do these rapid development tools work when it comes to producing HTML5 content? I conducted a brief test run on both tools, focusing on the HTML5 conversion, web delivery, and LMS compatibility. For this testing, I developed two short eLearning courses with Storyline and Captivate 5.5. Both courses contained the same number of slides, visual assets, and interactions. Here are my findings:
I’ll start with an interesting observation. Apparently, both conversion tools work only in Windows. Even though Mac users can run them on a virtual machine, it is a complicated process. I hope that a Mac version will be available in the near future!
Overall, I would say it is slightly easier to convert the course using Storyline. I can publish the project as an HTML5 package right in the “Publish Settings” window.
Unlike Storyline, Adobe Converter is a standalone application. First I needed to publish my Captivate project in Web/Flash format. Then I had to import the published SWF files into Adobe Converter. Not a terribly arduous process, but it does take a little time.
Storyline did a great job in converting the whole course. Everything that I put in the course translated well to HTML5. There were no misalignments or cut-offs, and the buttons and interactions functioned very well. Storyline maintains the same playback bar in the HTML5 package, which provides the same user experience for the learners.
Adobe Converter takes a different approach by wrapping the converted course in a pre-defined framework. Its interface is very different from the Flash output. Given the complexity of Captivate 5.5, it is reasonable that not everything can be converted to HTML5. For example, the matching and hotspot interaction types in my test course did not convert at all, and some ActionScript errors showed up right after I launched the converter. Given that Adobe Converter is still in beta evaluation, I believe these issues will be addressed in the final release, but anyone using Converter will need to test their content carefully to make sure.
The Storyline course loads faster, due to the small size of converted files. The package is also structured well. Basically, I can use the same package for all types of delivery, including Web/Flash-based platform, iPad, and LMS, by selecting different launch files in the package.
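As a sketch of what “selecting different launch files” might look like in practice, here is a minimal TypeScript helper that routes iOS devices (which cannot run Flash) to an HTML5 launch file. The file names here are hypothetical; the actual names depend on the authoring tool and version.

```typescript
// Hypothetical launch-file names -- the real names vary by tool and version.
const FLASH_LAUNCH = "story.html";        // assumed Flash/web launch file
const HTML5_LAUNCH = "story_html5.html";  // assumed HTML5 launch file

// Pick the launch file for a given browser user-agent string.
// iOS devices (which have no Flash player) get the HTML5 file;
// everything else gets the Flash file.
function pickLaunchFile(userAgent: string): string {
  const isIosDevice = /iPad|iPhone|iPod/.test(userAgent);
  return isIosDevice ? HTML5_LAUNCH : FLASH_LAUNCH;
}
```

A simple redirect page at the package root could call this with `navigator.userAgent` and send the browser to the right file, so one published package serves both desktop and iPad learners.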
Storyline also introduces a specific tool to launch the course on the iPad. You need to download the “Articulate Player” app from Apple’s App Store. When launching the course, the player will remove all the web interfaces. It also offers the option to “Download the course to my iPad,” a feature which allows users to access the course even when they don’t have an active data connection. This is a brilliant idea because not all iPad users subscribe to a wireless provider’s data package.
In comparison with the package produced by Storyline, the Adobe Converter package loads much slower. I believe this may be due to the complicated file structure and considerably larger file size. The converted package can also only be used for mobile delivery.
Storyline supports all major LMS protocols, including SCORM 1.2, SCORM 2004, and AICC. I tested it with SCORM 1.2 on Moodle, an open source LMS. It worked great with this LMS, and reported my learning status successfully.
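For readers curious what that status reporting involves under the hood, here is a minimal sketch of the SCORM 1.2 calls a published course makes to the API object the LMS exposes. The function names and data-model elements (`LMSInitialize`, `cmi.core.lesson_status`, and so on) come from the SCORM 1.2 run-time specification; the `MockLms` class is an illustrative stand-in so the sketch can run outside a real LMS.

```typescript
// The API object a SCORM 1.2 LMS exposes to launched content.
interface Scorm12Api {
  LMSInitialize(arg: ""): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
}

// Illustrative stand-in for an LMS; records the values the course sets.
class MockLms implements Scorm12Api {
  values: Record<string, string> = {};
  LMSInitialize(_arg: ""): string { return "true"; }
  LMSSetValue(element: string, value: string): string {
    this.values[element] = value;
    return "true";
  }
  LMSCommit(_arg: ""): string { return "true"; }
  LMSFinish(_arg: ""): string { return "true"; }
}

// Open a session, report completion and a raw score, then close the session.
function reportCompletion(api: Scorm12Api, score: number): void {
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSSetValue("cmi.core.score.raw", String(score));
  api.LMSCommit("");
  api.LMSFinish("");
}
```

In a real package the course first searches its parent frames for the `API` object the LMS injects; everything after that is the same handful of calls shown here.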
Adobe Converter only supports SCORM 1.2 at this point. It worked well with Moodle, and all communication and status reporting happened as expected. The loading speed, however, was still a major concern: the course loaded even slower through the LMS. This could be a real problem for users with slow Internet connections.
Overall, Storyline did a much better job in converting the course to an HTML5 package than Adobe Converter. However, I have to address the fact that Adobe Converter is still in beta evaluation, while Storyline is officially released as a commercial product. There still could be changes to Adobe Converter that will improve some aspects of the product.
With the advanced features in Captivate, such as customized variable control, ActionScript functions and Widgets, it will take time to develop a mechanism for converting them to HTML5. Also, please note that Storyline costs $1,400 for a license, while Adobe Captivate costs only about half of that. Adobe Converter is free right now in its beta form.
Adobe released the CS6 series of many of its products, like Dreamweaver, back in May, and these new releases include many features to enhance mobile compatibility. For example, Dreamweaver CS6 has added templates for developing native apps for iPad/iPhone. Captivate, however, was an exception to this CS6 upgrade; the latest version of Captivate is still 5.5. There are some discussions online about the possible new features that could be included when Captivate CS6 is released. Given the success of Storyline, I hope Adobe will add similar features in this next version of Captivate. Watch the Fredcomm blog for more information.
Each product and publishing format, of course, has its own interface design. Just for the sake of comparison, here are screen shots of the same course content from each of the tools in different published formats:
Captivate Flash output
Adobe Captivate HTML5 Converter on iPad
Storyline Player on iPad
HTML5 Output Storyline on iPad
This is the third and last installment in a series of blog entries about the significance of changing terminology in the field of [insert your preferred term here] human-computer interaction, usability engineering, user-centered design (UCD), and user experience (UX).
This summer the organization formerly known as the Usability Professionals’ Association officially changed its name from the UPA to the UXPA: the User Experience Professionals’ Association. This seemed to be the definitive statement that user experience is now the term being privileged over usability.
The publication of Jesse James Garrett’s Elements of User Experience, which I mentioned in an earlier post, was a key point in UX becoming the new popular term and marked another step in the continued broadening of focus in the field of human-computer interaction (which itself seems an inadequate label now). The key term is of course experience. It suggests a concern not only with whether a user can easily learn and remember how to interact with an interface, and do so without making errors – the basics of usability – but also with a user feeling satisfied, even delighted, with their interaction. After meeting the goal of basic usability, the emotional elements of interaction – is an interface trustworthy and persuasive, even fun? – become more important.
So the shift from usability to user experience is not a defeat or death of the former term, as some have suggested. It’s more accurate to see it as a validation. We’ve accepted the value of making products, specifically digital interfaces, easy to use and now we’re pushing further toward trying to figure out what really delights users, what moves them. A key text marking this evolution is Susan Weinschenk’s Neuro Web Design: What makes them click? (2009).
An issue with the term user experience – as is the case with user-centered design – is that it’s somewhat open ended and can mean different things to different people. For example, a concern I have is its use in job titles and descriptions. I don’t always know what a User Experience Designer does, because it’s a role that could involve so many activities and skill sets: user research, design evaluation, information architecture, visual design, programming, and so on. I understand that companies want one person to be able to do all of this, but to be a specialist in all of these activities is challenging. Someone may be called a UX Designer and do visual design, but not know which research methods are available to them, which ones to use when, how to properly gather input, or how to interpret the results.
The ongoing struggle with definitions, combined with technology changes and the continued broadening of focus in the fields of HCI and human factors, will mean that almost certainly we’ll see new labels to describe what practitioners do. For example, technology will likely force further specialization in narrower areas, such as mobile UX design. At the same time, there will be an interest in applying UCD principles in the design of an ever-wider variety of customer experiences as well as the processes that underlie those experiences. For this reason, I’m particularly interested in the concept of service design, which I believe has tremendous value in an economy dominated by the service sector, where in order to thrive companies must differentiate by providing superior customer experiences.
In contrast with user experience design, which usually focuses on the look and feel, structure, and content of a specific digital interface, service design focuses on the entire service interface. It is more holistic, including all of the customer touchpoints with an organization, such as offices or stores and the people who staff them, physical products, call centers, interactive voice response systems, correspondence, invoices, user assistance materials, as well as websites and applications.
Beyond these touchpoints, service design also seeks to assess and improve organizational processes and strategies that create the foundation for the service interface, because what usually leads to sub-optimal services are inadequate processes, and inadequate processes are usually a result of flaws in organizational structure and strategy. Conversely, superior services come from organizations that have been consciously designed to provide them.
The emergence of service design is indicative of a growing recognition that design is about much more than styling and cosmetics. The scope of design has broadened continually over the last several decades, from physical products to virtual interactions and experiences, and now to services. There has been a recognition that “design thinking” and the techniques that lead to good design are more widely applicable than was previously understood.
Whether the term service design really takes off in the US (it’s been more widely used in Europe so far) remains to be seen. However, I’m seeing more books and conferences with a focus on “customer experience” that take concepts and methods directly from the literature on service design.
Despite this exciting evolution, it’s apparent that many systems and products still need to pass the bar of basic usability. It’s also apparent that many product managers still think design is primarily about cosmetics, and that users – if they are asked at all – should be consulted only for validation just before a product is released. Whatever it is we call what we do in this still-young field, there are still many basic challenges that need to be met.
At the August 14 Intersect meeting about 50 of us got together to continue the discussion about using social media on public-sector websites. Social media can be a challenge to set up and maintain, but in many ways it is definitely worth the effort.
We had three members facilitate our discussion, each from a slightly different perspective:
Please note that the content of all presentations is the property of their authors and may only be reused with their permission.
Thanks again to Lucinda Plaisance and the Metropolitan Council for hosting, to our facilitators, and to everyone who participated. It was an excellent meeting.
Mark your calendars for our next meeting on Tuesday, November 13, 2:00-4:00pm. See you then.
In a previous post, I discussed the rising popularity of the term user experience in contrast with the term usability and what this might mean. This got me thinking about other terms in human-computer interaction (HCI) that have gone in and out of fashion and why.
When I first became a practitioner of usability evaluation methods about 12 years ago, it was common for people in the field to be called usability engineers. Jakob Nielsen’s book Usability Engineering (1993) was a key reference for many of us. By describing ourselves as engineers, we wanted to suggest a methodology and toolset driven by data. The label was an assertion of legitimacy: as engineers, we had something valuable to contribute to design.
Use of the term usability engineering was in part a reaction to human factors, which suggested a concern with a narrower range of specialist environments and applications (e.g., power plant control rooms, airplane cockpits, medical devices). In contrast, usability engineering was focused on technology intended for a much wider audience of non-specialist users (e.g., desktop software and websites). The popularity of usability engineering was indicative of a gradual broadening of focus in HCI to include the needs and wants of more and different audiences as they interacted with new information technologies available to them beginning in the 1980s.
So what happened to usability engineering? The practices associated with that term are more widely used now than ever, so in a sense usability engineering is thriving, but hardly anyone calls it that anymore. Why?
Rather than engineers, people in usability roles thought of themselves more as adjunct designers. The process of designing interfaces, and who should be involved in that process, became a central concern in HCI. Vredenburg, Isensee, and Righi helped to explain and popularize the concept of “User-Centered Design” in their book of that title in 2002, and Jesse James Garrett published a key text called Elements of User Experience: User-Centered Design for the Web in 2003.
As those titles suggest, user-centered design (UCD) became the new it term.
When the Karats published their paper on “The evolution of user-centered focus in the human-computer interaction field” nearly 10 years ago, user-centered design was the hot concept of the day. They wrote: “We suggest that UCD is a good label under which to continue to gather knowledge of how to develop usable systems. It captures a commitment that the usability community supports – that users must be involved in system design – while leaving how this is accomplished fairly open.”
That commitment to involving users in system design hasn’t changed, yet the popularity of UCD faded, though it certainly hasn’t disappeared. One reason for this, I believe, is that UCD is not a step-by-step design and development methodology. Instead, it’s a set of principles at the core of which is the importance of gathering representative user input at key points during design and development. Principles are important, but I think many people wanted UCD to be more than a loose philosophy.
Another reason why UCD became a less popular term is that it was sometimes misunderstood to mean that users would dictate design and that the business objectives of the product or system were secondary to whatever users said they wanted. In fact, the importance of first defining business objectives is a key principle of UCD, but because the name puts users at the center, this point could be overlooked.
In the third and last entry, I'll look at why UX has become the new it term, and whether service design will catch on.
Editor's Note: This entry is part of the Fredrickson Thought Leaders in Learning series. For this guest blogging series, we've invited well-known experts in a variety of fields to address leadership-level learning and development professionals with their thoughts on topics of their choosing. Our hope is to prompt discussion around an expansive range of ideas and concepts.
I wanted to start this guest blog with a special thank you to Lola and her team at Fredrickson Communications for bringing together the learning community recently in a fabulous Learning Leader Summit. I am always impressed by the caliber of talent that we have in the Minneapolis area, and I enjoyed having the chance to connect with colleagues doing great work in their organizations.
Over the last 20 years my work as a professor, consultant, and executive has focused on leadership development and talent management. As such, I have done a lot of thinking and research into the increasingly critical question, “How do we accelerate the development of leaders?”
Since this is a short blog, I will fast forward to the punch line: I believe that you accelerate development by harnessing the developmental potential of on-the-job work experience. Now, if you are an adherent to the 70-20-10 concept, you may be tempted to stop reading here, because you already know that. After all, 70% of what leaders need to know to be effective they learn on the job, 20% they learn from relationships with others, and 10% they learn from formal sources. So what’s new?
Well, 70-20-10 is a great concept, but it’s not a great practice. The concept has made an important contribution by highlighting the fact that people learn a lot from experience, but it doesn’t provide any concrete guidance for practice. Namely, how exactly do you harness experience to accelerate a leader’s development?
The key to experience-based development is to foster a Learning Mindset
My work over the years has convinced me that while the best leaders have exceptional natural talents, they become great leaders because they approach their work experience with a Learning Mindset. The best leaders routinely:
While people can’t change their natural talents, they can get onto an accelerated path to success by learning how to practice the Learning Mindset of an effective leader. To do so, they need a language and mental model that enables them to think differently about work experiences—to approach them not just with a Performance Mindset, but also with a Learning Mindset.
FrameBreaking™ Leadership Development: a new way of thinking about work experiences
The FrameBreaking Model, developed from research on the careers of 101 successful leaders, is a simple, but powerful, tool for jumpstarting the adoption of a Learning Mindset. It provides individuals and managers a simple structure for thinking about work experiences along two distinct dimensions: Intensity and Stretch.
Intensity and Stretch are two powerful development dimensions that are often mixed together in our thinking about experiences. Yet, these dimensions are actually distinct—one can have experiences that are high in Stretch without being high in Intensity, and vice versa—and they drive very different development outcomes.
Combining the two dimensions creates the FrameBreaking model with four types of experience: Delivering, Mastering, Broadening, and FrameBreaking. The most transformational experiences are referred to as “FrameBreaking” experiences; because they are high on both dimensions, the individual must question their assumptions about how to achieve success and undergo a degree of personal transformation. Yet, FrameBreaking experiences involve a higher degree of risk than other types of experience.
The real challenge for leadership development professionals is not to find FrameBreaking experiences for all leaders, but to make better decisions about the kinds of experiences that leaders need, given their prior experience and personal aspirations. The model provides a new lens that can spark insight for individuals and help organizational decision-makers to ensure that people are getting the experiences (and learning) they need to be ready for the future.
About the author of this Thought Leaders in Learning entry: Mark Kizilos is the Assistant Dean for Executive Education at Carlson School of Management. For more information about the FrameBreaking Leadership Development approach, visit the FrameBreaking Leadership website. His new book, “FrameBreaking Leadership Development: Think differently about work experiences to achieve more, faster,” is available from Amazon.com.
Our speaker, Kevin Wilde, did us all a great favor as he concluded this year’s Learning Leadership Summit. He built in time for us to reflect on our thinking from the day and make an action plan for executing once we got back to our normal routines. But ask yourself, are you being true to your intent and to your personal development in making that execution happen? Or did you just write it down and now have forgotten it?
Here is your gentle reminder to consider the follow-through. Executing on a half-day development program may be the hardest thing to do – especially for leaders who have to-do lists that are miles long and a seemingly never-ending list of “when I have time” ideas. So, take a minute. If you are reading this, take that minute now. What is the goal you wrote down for improvement? How will you practice this new skill?
I’ll share a personal story. My goal is to communicate powerfully and prolifically. One of my ideas for practice is to blog more – communicating about the things that are relevant and happening in our industry, publicizing what we at Fredrickson think and are passionate about, and fostering community interaction. This blog entry is a way for me to start my practice. I’m interested in the support of this community – through sharing and commenting on this and other blog entries of interest that you may find here.
That’s my personal story from the Learning Leadership Summit. So I’ll end with a call to action for you. I urge you to capitalize on the time you took for yourself to make sure you are following through on what you said you wanted to do. We’d love to hear and learn from you as well.
I have three kids and a husband. Without exception, they are easily and quickly drawn to the shiny new thing off in the corner. I can lose my husband for hours in a bookstore as he picks out books one by one and says, “Oh, cool!” My kids do the same in the toy aisle at Target. At some point in both of these occasions, the question becomes, “Can we have this?” And, as the person who often has to answer in the negative, the question in my head is: “Do we NEED it?”
As someone working in an ever-changing and advancing field, there’s always a new shiny thing that’s set down right in front of us. And then there’s the shiny new thing that’s coming soon after that. We get excited—Mobile, HTML5, Storyline, and more that I can’t even begin to imagine. Even in instructional design there are new methods, techniques, ideas and so on. And we, being passionate about our field, want to use every one of them—NOW!
Long ago, I worked with a web developer to build a service website. About halfway through my telling him about our needs, he was no longer listening. He was off in his world of possibilities. When it was his turn to speak, this simple website had become, in his mind, a universal portal available to everyone in the world with this and that and all these really “cool” things. I didn’t realize it until afterward, but this was a valuable life lesson—just because something is possible doesn’t mean you should do it.
Don’t get me wrong. I’m all about possibilities. But those possibilities have to serve a purpose. Cool for the sake of cool does nothing but increase your stress and budget. Because of this, I weigh every possibility on the following questions:
If the answers to any of these are “no,” I know to back away slowly from the shiny thing in front of me. It still is cool, but I can guarantee you, if it’s truly that cool, it will come up again. Just put the shiny new thing on a shelf and look for an opportunity to pull it down, dust it off, and use it on the right project.
Editor's Note: This entry is part of the Fredrickson Thought Leaders in Learning series. For this guest blogging series, we've invited well-known experts in a variety of fields to address leadership-level learning and development professionals with their thoughts on topics of their choosing. Our hope is to prompt discussion around an expansive range of ideas and concepts.
Organization development (OD) deals with developing effective organization systems, productive human interaction patterns, and change processes that work. Talent management deals with the strategic use of talent: effectively designing the organization and identifying, acquiring, integrating, developing, performing, and retaining the talent that's needed. Like most things in organizations, it all needs to start from a strategy developed to align with anticipated changes in the environment and an organization designed to enable strategic execution.
With that basis, let’s look at how OD can support the work needed in talent management.
Identification: Given any mission, environment, and strategy, what talent is needed to execute successfully? This can be viewed through competencies, experience, capabilities, or characteristics. OD brings a special focus on designing processes for conducting this work and on facilitating diverse groups in developing consensus or making decisions.
Acquisition: Given the strategic needs, it is necessary to identify candidate sources, attraction strategies, diversity needs, and recruiting and selection processes. Including the necessary perspectives and internal resources, designing the needed systems and selection processes, and ascertaining cultural fit can all be managed more effectively with OD support.
Integration: All talent needs to be brought on board, integrated with other resources, and guided through managed transitions among teams and through cultural adaptation. OD can provide expertise for systems design, transition processes, team development, and culture assimilation.
Development: Developing talent involves building technical, relational, and managerial competencies for current performance and emerging needs. Continuous change creates cyclical needs for new learning, new skills, and new relationships. OD works with interpersonal, group, inter-group, and social-system roles, relationships, and interaction patterns. OD can also assist in designing learning systems and development processes.
Performance: Managing performance is an age-old dilemma, but one important aspect is the relationship between the context and an individual’s performance. Besides knowledge and skills, the work system, technology, social relations, management, and feedback cycles can all affect one’s performance. How these are designed makes a difference, and the design of the performance management system itself can likewise be better or worse. OD provides useful skills for working on both of these needs.
Retention: When we have the talent we need, retention enters the picture. The problem-solving, monitoring and communications needed for retention can all be assisted with OD principles and practices. Meeting welfare, motivation, engagement and commitment preferences of desired talent gets into culture, policy, management and design arenas.
It’s easy to understand why talent management has become so central and critical. More and more, organizations can compete only through their human capital, organization capabilities and execution, all of which depend on who you have and how well they can execute your strategy.
All of these aspects need to work in alignment as an integrated system. Without OD in support, many of these functions can be under-optimized, fragmented, poorly designed or ineffective with the human resource base in the organization. We are entering an era when it’s becoming important to integrate the HR and OD mindsets and skill sets necessary for complete talent management.
About the author of this Thought Leaders in Learning entry: Dr. David Jamieson is Associate Professor & Department Chair, Organization Learning & Development, College of Education, Leadership & Counseling at the University of St. Thomas in St. Paul, Minnesota. He is also President of the Jamieson Consulting Group, Inc., Practicum Director in the M.S. in Organization Development Program at American University and a Distinguished Visiting Scholar in other OD programs. He has 40 years of experience consulting to organizations on leadership, change, strategy, design and human resource issues. He is a Past National President of the American Society for Training and Development (1984) and Past Chair of the Management Consultation Division and Practice Theme Committee of the Academy of Management.
A few months ago, I came across a blog post by Craig Tomlin that claimed “usability is dead,” and that it had been “killed” by user experience (UX). The evidence he presents comes primarily from Google Insights, Google indexed pages, and job postings on Monster, all of which show that for the last several years user experience has been a more popular term than usability.
This is true, but instead of saying that usability is dead – which is overstating it – it’s more accurate to say UX has subsumed usability. The practices associated with usability evaluation and usable design live on but increasingly under the rubric of UX. The interesting question that Mr. Tomlin doesn’t address directly is why this has happened.
I’d suggest that user experience has become the more popular term because it implies a broader, more aspirational goal than usability, even if the methods used to reach that goal are not new. UX represents another stage in the continued broadening of focus in the field of human-computer interaction (HCI), from ensuring basic ease of use to ensuring a superior experience across a growing range of products and systems.
In their 2003 article on “The evolution of user-centered focus in the human-computer interaction field,” John and Clare-Marie Karat make a crucial point about the type of name change in question (the bold emphasis is mine):
[E]ven though name changes may reflect very little in the way of real content changes for the underlying activities involved in developing usable systems, names can be valuable in communicating what one considers important. Within the general field of human computer interaction, we have many attitudes and approaches (e.g., participatory design, contextual inquiry, UCD). Though these names are all loosely united by a commitment to developing more usable human-computer systems, the focus of activity is reflected in the title.
The terminology shift from usability to user experience isn’t the first in the field of HCI and it won’t be the last. Other terms have risen in popularity and then faded and the same might happen to UX. In the next posts, I’m going to look at a couple of examples of terms that were once hot and then were not, and try to figure out what happened. I’ll also speculate about whether another term, service design, might catch on.
I’d love to hear your input along the way.
“I don’t know.” Not a very reassuring thing to hear from a consultant. After all, aren’t you paying a consultant to give you an answer? But in fact, that phrase is one of the “8 Great Things Consultants Say” in a recent article by Jeff Haden for Inc. The reason he gave for including it is that a consultant willing to say, “I don’t know. Let’s figure it out,” is also more likely to have a collaborative approach.
For my part, I’ll add two more reasons: a) it’s honest, and I like working with people who are honest about their work, and b) a consultant’s job isn’t to know everything—it’s our job to learn. We study and observe. Then we take that information, add in some of what we do know, and make recommendations that will meet your specific needs. If we knew everything, there would be a simple formula for every problem and you could buy a book of formulas and voila! Your problems are solved! But chances are, if you’re hiring a consultant in the first place, you’ve read a few books, tried a few things, and nothing worked.
Long ago, I learned a model for discussion called the “Know/Don’t Know” circle. The idea is simple enough. Draw a circle and ask: “If this circle represents everything there is to know in the entire world, how much of it do you think you know?” Fairly quickly your group will say a very small percent, which is then drawn on the circle. The next questions are how much do you know you don’t know, and how much do you think you know. The latter is usually followed by a chuckle at the truth of the question. At this point, a little less than half the circle is accounted for. Then comes the big question: “What does the remaining part of the circle represent?” The answer is that it represents the things in this world that we don’t even know we don’t know.
I go back to this often, especially the idea of just how many things there are in this world that I don’t even know I don’t know. I don’t even have a frame of reference for identifying them. For me personally, I keep this in mind when I’m creating something. I often stop myself before I reach a completion point and ask someone else for his or her perspective. Maybe they know something I don’t. And maybe by learning it from them, my project will be even stronger than it would have been if I’d kept going myself.
This is exactly why I love working with clients who engage with us for a Discovery Phase of their project. To me, the Discovery process is a meeting of the “don’t know you don’t know” part of our circles. You know your subject, I don’t. I know Learning Strategy, you don’t. You help me understand your content, and I’ll help you achieve your learning (and ultimately your business) goals. It allows us to join our two sets of knowledge at a place that is full of possibilities.
A big thank you to all those who attended our Online Forms Extreme Makeover webinar on June 20. Rebecca and I both hope this webinar helps you to improve your users' experience with online forms.
If you need help making over a tough online form, or any other aspect of improving your agency's user experience, we can help.
If you'd like to take another look at the form makeovers and the other information in the slide deck, here it is.
And if you would like to watch the whole webinar again or share it with colleagues, here's the recorded version.
Stay tuned for more information about upcoming webinars in the Fredrickson Communications public-sector webinar series.
Fredrickson affiliate Gerry Wasiluk described the eLearning field as “being in a crazy time between old and new technology” at the June 12 FRLL Storyline SIG meeting. Gerry was at the meeting to inform Twin Cities Learning Leaders about Storyline, the new “rapid” development authoring tool from Articulate. He illustrated his theory regarding the state of eLearning with overlapping circles. The “crazy time,” he said, was at the center.
After hearing Gerry talk about the features and capabilities of Storyline, and after using the software program, I’m anxious to use it to create an eLearning project. I’m excited about the tool’s potential, because it pushes beyond the limits of other eLearning authoring tools.
Gerry, who also was involved with beta testing the software, called Storyline “intoxicating.” Then he cautioned SIG attendees when he said: “Stick to your [eLearning] designs.”
Gerry’s right. Storyline is just a tool (albeit, a very nice tool!).
Generally, eLearning designers agree with Gerry’s assessment regarding the impact of ongoing and rapid changes in technology on our work: this IS a crazy time in the eLearning field. It’s also an exciting time. I see Storyline as a tool that instructional designers and developers will use to move from where we’ve been to where we want to go with eLearning development—toward more engaging and interactive eLearning.
Storyline will make the trip easier!
Note: The little scenario you see here was created in Storyline for this post.
Update: Links in the PowerPoint show below are now fixed. Download away.
Thanks to everyone who attended the Introducing Articulate Storyline SIG this morning. We also want to thank our SIG host, Allianz, and our discussion leader Gerry Wasiluk.
And now what you've all been waiting for: The link to Gerry's Storyline PowerPoint show.
If, by chance, you just want to head straight to The Green Monster demo, here's a shortcut.
Articulate released the highly anticipated Storyline product about a month ago, giving the learning community a stunning new tool that extends the concept of rapid eLearning development into a professional development suite. I was invited to be a beta tester for Storyline, and based on that experience, here are my top 5 favorite new features in Storyline:
Overall, I am excited that Storyline is finally available to all eLearning developers. If you would like to know more about this new tool, you are welcome to join the Fredrickson SIG featuring Articulate Storyline, presented by Gerry Wasiluk, Fredrickson Affiliate and Articulate MVP, on June 12th.
“Man is a gaming animal. He must be always trying to get the better of something or other.” ~Charles Lamb (1775-1834) in Essays of Elia
We grow up playing games. We play ball games at school and board games and card games at the kitchen table. Many of us played games in the neighborhood after school or the evening meal, too. Today’s virtual reality games give us even more game-playing options. We like to play games because they challenge us—and because we have more fun when we acquire new skills. One important element of many games is strategy. Unfortunately, strategy is often the last thing we learn about when playing games. Having fun and winning are more immediate and often seem more important than learning strategy.
My interest in the strategic aspect of games was sparked a couple years ago, when I noticed how often people who win in their personal and work lives also played sports in school. I was intrigued and wondered what advantage, if any, children who play school sports have that other children don’t. I concluded that coaching is the advantage sports offer. Coaches teach strategies that help players win. When winners transfer their skills and attitudes to their day-to-day activities upon entering adulthood and the workplace, they continue to win.
Unfortunately, games are something most of us think of as entertainment. We pay to watch other people win. We begin to acquire this attitude toward play in school, because education and play are, in the minds of many educators, like oil and water. They don’t mix together well. Play is frivolous. Learning is important.
This attitude sticks with us like glue when we leave school and enter the workplace, where we’re supposed to be responsible adults. It’s no secret that responsible adults don’t play, they work.
In fact, the only way a game has a chance of making it into the workplace is if it’s a “serious game.” What is a serious game? It’s an interactive computer application that has a challenging goal, is fun to play, incorporates scoring, and imparts to the player a skill, knowledge, or attitude that can be used in the real world. Serious games were first developed and used for military training, and simulations are in wide use today. But to produce a simulation game, the developer must have knowledge of programming and learning principles, as well as specialized software. Games must be based on sound learning principles. Otherwise, they’re only entertainment.
I don’t have programming skills or the right software, so I won’t be creating simulations. But the evidence that games can teach people how to be strategic decision makers is too compelling to ignore. So I found myself wondering: what else is possible? I’ve found some answers. Scenario-based learning and case studies can be used to teach decision-making skills. I’ll be exploring each of those and writing about what I learn in the weeks to come.
We’ve been doing a bit of discussion lately at Fredrickson about what the purpose of a blog is for us. Is it the format where we want to post complete articles? Or is it a way to share our ideas, thoughts and curiosity about our profession in all of its facets? Do we want to publish only when we have complete thoughts and answers to questions, or do we want to cultivate discussion about topics that attract our attention, and the attention of others in our communities?
I believe the answer is yes to all of these questions. But what sometimes holds us back is that while sharing complete thoughts and articles gets a clear YES, we’re not convinced we would give as resounding a YES to the other questions. I know that we want to show the vibrancy of our processes and thoughts about our profession. We want to spark conversation and discussion. We want to connect with others and share our collective wisdom – and learn from your thoughts and comments.
So expect to see changes here to our blog. You’ll start to see more posts, more often, from more members of our team. And we hope to inspire you to respond – to continue the conversation, to let us know where you think we should modify our thinking, and to hear where our thoughts further inspire you.
Editor's Note: This entry is part of the Fredrickson Thought Leaders in Learning series. For this guest blogging series, we've invited well-known experts in a variety of fields to address leadership-level learning and development professionals with their thoughts on topics of their choosing. Our hope is to prompt discussion around an expansive range of ideas and concepts.
Despite economic conditions, unemployment levels, or any other business factor imaginable, your best employees – the ones you need most – want one thing from you, plain and simple: to support their growth and development. Study after study confirms that development is the single most powerful tool managers have for driving engagement, retention, productivity, and results. Yet, learning leaders know that career development is frequently the thing that gets sidelined unless or until the organization demands that some form be submitted during regular review cycles.
My new book (with co-author Julie Winkle Giulioni) sheds a much needed light on specifically what managers can do – within the time-starved, priority-rich, pressure-cooker environment in which they operate – to support employees’ careers. And it comes down to this: engage in short, ongoing conversations with employees about their career options, needs, and passions.
It’s really that simple... and that complex.
How do you, as a leader, start engaging your employees in career conversations? Here's a preview of the first two chapters of my soon-to-be-released book to get you started.
Our congratulations to the City of Woodbury and its Communications Coordinator, Jason Egerstrom, for winning an Award of Excellence in the websites category in the Minnesota Association of Government Communicators (MAGC) annual Northern Lights Awards competition. The City’s website was also nominated in the Best of Show category. We were happy to have played a role in providing user experience testing and consulting to the City as the new site was being designed and developed. We’ll keep our fingers crossed for that Best of Show award, Jason!
And we congratulate the Minnesota Department of Revenue (MDOR) on recently launching its new website and the positive coverage in the Minneapolis Star Tribune. We worked with the Department’s web team and the developers to conduct user research, user experience testing, and consulting. Back in 2009 I blogged about our findings from audience research that users preferred a stronger audience-based navigation format for the MDOR site. Audience-based navigation is often not the best approach, but it works well for many government agencies because of the diverse audiences those agencies serve, each of which often has very different concerns.
Redesigning and developing a new website for any large organization is a huge endeavor – bigger than is often realized by teams doing it for the first time. And for public sector organizations, the scrutiny can be all the greater for obvious reasons. But given the benefits of providing excellent online self-service options, the effort and occasional headaches are well worthwhile.
Congrats again to Woodbury and MDOR on a great job!
First, some quick stats:
In addition to the growing numbers, it’s important for web managers to be aware that, as Josh Clark says, “Anything that a user can do on mobile, they will do on mobile.” The first of Clark’s Seven Deadly Mobile Myths is that there is no stereotypical use case for mobile.
Given that millions of us are already using smartphones and tablets, and many more of us will be using them over the next few years, what are government organizations in Minnesota doing with their sites and applications to respond? That was the core question at our last Intersect meeting. It’s worth pointing out that with respect to web matters, government agencies in Minnesota are generally leaders, not laggards.
The results of a survey on mobile that we distributed to our Intersect membership a few weeks prior to the meeting show that most government organizations in the state are still in the early planning stages.
Responses to an open-ended question about key challenges in adapting to a more dynamic and mobile web environment indicated a lot of uncertainty about where organizations would find the time, budget, and resources (either internally or via contractors) to handle new design and development work. There were also concerns from several organizations about whether a mobile audience truly exists for their content.
We were fortunate to hear from three early adopter organizations about what they have developed and what they have planned.
John Siqveland, Public Relations Manager for Metro Transit, demonstrated Metro Transit’s mobile website, showing how customers with smartphones can have fast access to tools like NexTrip, to get real-time bus departures, and Trip Planner. John explained that Metro Transit so far has not developed apps for particular devices but has instead made data publicly available for app developers at datafinder.org. Check out the range of metro transit apps that have been developed so far.
John suggested that it may not be long before riders can use their phones to interact with a chip or image (or whatever ends up replacing QR codes) at transit stops to get real-time information.
Jed Becher talked about the DNR’s Android app, LakeFinder, their cross-platform app, MN Water Access, which locates public water access points, and a Fall Colors mobile site. All are available from the DNR's mobile apps page.
LakeFinder was developed as a proof of concept for the department, showing how mobile development could be done. As of April 2012 they have had nearly 17,000 active device installs. Jed said that the DNR is currently working on an HTML-based replacement for LakeFinder. They are also adding an HTML-based “Where am I?” mobile web version of the Recreation Compass to assist citizens in determining if they are on public land or where the nearest public land is located. The Department is also making high-use pages on the site more mobile friendly.
J. Hruby, Fredrickson’s VP of Sales and Marketing (and our most avid outdoorsman), congratulated the DNR for its work on LakeFinder. J. made an excellent point about how effectively the DNR has won fans in the public because of tools like LakeFinder, to the point where they are happy to pay higher fees to continue getting such great service.
We ended the session with Marc Drummond, Web Technologies Coordinator for the City of Minnetonka. Marc was the person who introduced me to Ethan Marcotte’s concept of responsive web design over a year ago, and he is now redesigning Minnetonka’s website using responsive design techniques. Marc shared a beta version of the new site during his session.
Marc began his talk by referring to Stephen Hay’s famous tweet from January 2011: “There is no mobile web. There is only the web, which we view in different ways. There is also no desktop web. Or tablet web. Thank you.”
What designers like Ethan Marcotte, Stephen Hay, Josh Clark, and Marc Drummond suggest is that we “shouldn’t be developing completely separate mobile websites, or iPhone websites, or iPad websites, where well defined universal websites would suffice” (Josh Clark). Instead, there should be one web. As Marcotte wrote in his pioneering article on responsive web design: “Can we really continue to commit to supporting each new user agent with its own bespoke experience? At some point, this starts to feel like a zero sum game. But how can we—and our designs—adapt?”
The three technical ingredients of responsive design that Marcotte describes, and that Marc explained in his talk, are fluid grids, flexible images, and media queries. It also requires a new way of thinking: “Now more than ever, we’re designing work meant to be viewed along a gradient of different experiences. Responsive web design offers us a way forward, finally allowing us to ‘design for the ebb and flow of things’” (Marcotte).
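Of those three ingredients, fluid grids are the easiest to illustrate with numbers. Marcotte's rule for converting a fixed pixel width into a flexible percentage is target ÷ context = result. Here is a minimal sketch of that arithmetic (the pixel values are made-up examples, not from any of the sites mentioned here):

```python
# Marcotte's fluid-grid formula: target / context = result.
# It converts a fixed pixel measurement into a percentage of its
# container, so the layout scales proportionally at any viewport width.
def fluid_width(target_px, context_px):
    """Return the target width as a percentage of its context."""
    return round(target_px / context_px * 100, 4)

# A 540px column designed on a 960px mockup becomes a proportional width:
column = fluid_width(540, 960)  # 56.25 (%)
```

A column computed this way would be declared as width: 56.25% in the stylesheet rather than width: 540px, which is what lets the same markup reflow on phones, tablets, and desktops.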
We'll be revisiting the subject of mobile user experience many more times. For now, check out Jeff Zeldman's excellent list of mobile web resources and best practices. This is a great place to start digging in to learn more.
Coauthored by Al Watts, founder of inTEgro, Inc.
Editor's Note: This entry is part of the Fredrickson Thought Leaders in Learning series. For this guest blogging series, we've invited well-known experts in a variety of fields to address leadership-level learning and development professionals with their thoughts on topics of their choosing. Our hope is to prompt discussion around an expansive range of ideas and concepts.
Who’s the wisest person that you know? Why does that person come to mind, and what are some characteristics of other wise people you know?
Competency, skills and expertise are desirable, but cannot take the place of wisdom. There are competent, highly skilled and even expert sailors, for example, who may not be wise. There is a saying among Lake Superior sailors that comes to mind: “The Superior sailor uses superior judgment to avoid situations that require superior skills.” For an example closer to home, if some organizations in the news lately had exercised more wisdom, they likely would have saved a bundle on legal fees.
As we think of truly wise professionals that we know, here’s what comes to mind:
What’s the big deal with wisdom, and why be concerned about it? For one thing, many of our wise human resources are heading out the door from attrition or retirement. “Knowledge management” was a hot topic a while back, and now “talent management” carries the day. What about “wisdom management?” What are we doing to acquire, cultivate and retain wisdom in our organizations?
Whether in-house or contracted, wise resources contribute value that is distinct from merely competent or even expert talent. Their depth of experience and personal characteristics bring a different dimension to problem solving. Instead of merely helping solve problems, they help us discern which problems are worth solving or how to avoid them in the first place. Competent, skilled or expert resources can answer our questions; wisdom helps us make sure that we are asking the right questions.
When facing a challenge in your organization, make sure there’s wisdom on your team. Sometimes an outside view helps – fresh eyes that have seen a lot and bring new perspectives, making sure that we’re asking the right questions and solving the right problems. We need to give more thought to the role of wisdom in our work and organizations – when we need it, how to get and grow it, how to leverage it and how to retain it.
Would others describe you as “wise?” What can you do to cultivate your own wisdom?
How can you cultivate, retain and leverage wisdom in your organization?
"The young man knows the rules, but the old man knows the exceptions." -- Oliver Wendell Holmes
About the authors of this Thought Leaders in Learning entry:
Al Watts is a veteran consultant and author of the book Navigating Integrity – Transforming Business As Usual Into Business At Its Best (Brio Books, 2010.) Al is the founder of inTEgro, Inc.
Lola Fredrickson is Chief Executive Officer of Fredrickson Communications.
We discussed the usability of Learning Management Systems (LMS) at the April meeting of the Fredrickson Roundtable for Learning Leaders. As part of the discussion, I led a live demonstration of our usability testing process and reported on the results from sessions with two other testers that took place earlier.
The LMS we tested was a popular large-scale LMS and the member company that volunteered their LMS for testing has had this system implemented for a number of years. I don’t think specifically naming the LMS is of any benefit because the issues that we uncovered are certainly not unique to this LMS. What we found were some common usability issues that occur in a wide variety of systems.
This (admittedly brief) round of testing uncovered three issues where the LMS featured in our usability test could be improved:
However, for the LMS that we tested, the process for launching the course required the user to click a button labeled “launch,” and then to click a link labeled, you guessed it, “launch”. All three testers remarked on this redundancy. As one tester put it, “If you ask me to launch again, I might say no!” This kind of extra click frustrates users because it feels like an inefficient use of time. We recommended linking the first launch directly to the course.
Why? The experienced users were familiar with scrolling through the general list of courses, where they could launch a course they had enrolled in, but not yet started, by clicking the Launch button. They therefore expected to scroll through this same list to find a course they had started but not yet finished, by clicking a button labeled “Re-launch.” They did not realize they had to go to a separate page to find the course they had started and then click a link to resume the course.
The lack of consistency in handling a course relaunch made this task especially difficult for them. We recommended adding a Re-launch or Resume button for courses that are in progress.
Although this was a small demonstration usability test with only three participants, we were able to uncover three significant issues needing improvement in the LMS we tested. The facilitated discussion that followed the demo test made it clear that usability is an issue with most learning management systems.
Even though an LMS is often purchased from a vendor, and therefore the purchasing company does not have direct control over all aspects of the LMS interface, usability tests can be well worthwhile. The results can be used to negotiate when renewing an LMS contract or, even better, to help evaluate an LMS before purchase. If changes can’t be made, identifying potential difficulties ahead of time can help shape documentation and rollout messages related to LMS deployment.
For more on the topic of LMS usability, see John Wooden’s article How Do We Improve the Learner Experience of LMS’s?
Most human resource development professionals associate integrity with ethics, define it as some variation of “just doing the right thing,” and believe that you can’t really train people to have integrity anyway. Besides, how does integrity really impact the bottom line?
Integrity in the context of ethics or morality is really only one of a standard dictionary’s definitions, though, and not even the first. Its first and second definitions are about being “complete,” “whole,” “unbroken” and “perfect” – concepts that have more to do with effectiveness than just ethics. Think of “product integrity,” “design integrity” or “supply chain integrity,” and the whole picture – including HRD’s role – becomes clearer.
What do product and design integrity look like for training professionals? “Form follows function,” so a first consideration is clarity of purpose.
We can translate purpose here to include the mission of our function or role, as well as things like the goal of an intervention or the learning objectives of a training program. We model integrity when our actions, products and services fit the intended purpose. Training modules display integrity of design when they fit together and as a whole accomplish learning objectives within whatever budget and other parameters we have. Integrity for trainers includes accountability for results; we know that implies more than just “smile sheet” evaluations.
The greater our experience and responsibility in HRD, organization development and broader HR roles, the broader or more “whole” our perspective on integrity needs to be. Big picture-wise, HR and HRD’s overall purpose is aligning, or integrating, the people domain with the business or purpose of their organization.
Integrity, or “form following function,” in that regard means that talent acquisition, organization design, performance management, development and other HR practices need to fit, or be aligned with, the organization’s mission, values and strategy. From my experience both inside organizations and as a consultant for nearly thirty years, I know of few things that stifle effectiveness and engagement more than disconnects between stated purpose or values and actual organization or leader practices. Ralph Waldo Emerson put it this way: “Your actions speak so loudly, I cannot hear what you are saying!”
Even though it’s only one aspect of integrity, HRD and HR professionals cannot overlook its ethics and morality dimensions. We certainly have enough examples of how illegal, immoral and unethical practices have derailed organizations and leaders – Penn State, Lehman Brothers, Washington Mutual and Goldman Sachs to name just a few of the latest. We cannot sit by and assume that a combination of laws, regulations, risk management specialists and legal advisors will take care of all that.
The Sarbanes-Oxley Act of 2002 was passed shortly after the Enron debacle, and was an attempt to legislate ethical corporate practices. Then came the 2008 economic meltdown, fanned by “creative accounting,” lack of transparency, blatant conflicts of interest and plain old greed. “Inside-out” approaches to creating ethical cultures are always more sustainable than attempts to legislate ethical behavior.
HR and HRD professionals can play a pivotal role crafting ethical cultures from the inside out by helping their organization navigate these dimensions:
Perhaps the multiple ways that integrity impacts ethics, engagement and effectiveness account for Noel Tichy’s perspective that “Integrity is the cornerstone of free enterprise, and every leader needs a clear teachable point of view on it.” Human resource and human resource development professionals will benefit by adopting that perspective and positioning integrity centrally in their own strategies.
About the author of this Thought Leaders in Learning entry: Al Watts is a veteran consultant and author of the book Navigating Integrity – Transforming Business As Usual Into Business At Its Best (Brio Books, 2010). Al is the founder of inTEgro, Inc.
Back in January it was reported that President Obama’s State of the Union address was written at an 8th grade reading level. In fact, a Smart Politics study of the 70 orally delivered State of the Union Addresses since 1934 found “the text of Obama's 2012 speech to have tallied the third lowest score on the Flesch-Kincaid readability test, at an 8.4 grade level. Obama also delivered the second lowest scoring address in 2011 (at an 8.1 grade level), and the sixth lowest in 2010 (at an 8.8 grade level).”
It makes sense for President Obama to focus on readability. He signed the Plain Writing Act of 2010, which requires the federal government “to write new publications, forms, and publicly distributed documents in a clear, concise and well organized manner that follows the best practices of plain writing.” I’m all for plain language and I applaud organizations that embrace it. (Kudos to Hennepin County in Minnesota for following the Plain Language law even though they don’t have to.) But I question the value of using automated tests to assess readability and I doubt the meaningfulness of the reading grade levels these tools spit out.
The Flesch-Kincaid test is a tool that many of us have used to calculate the reading grade level of text we’ve written or edited. But what do automated readability tests really tell us? What does it mean when they say some piece of text is written at an 8th grade level? In truth, not much.
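Counts of words, sentences, and syllables are all a formula like Flesch-Kincaid ever sees. As a quick illustration, here is a minimal sketch of the grade-level calculation in Python; the vowel-group syllable counter is a crude stand-in of my own, not the dictionary-based counting that real readability tools use:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    # Real tools use pronunciation dictionaries; this only approximates.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # Split into sentences and words with simple punctuation rules.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # The Flesch-Kincaid grade-level formula: nothing but two ratios.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = "The applicant must appeal within twenty days. Call us with questions."
print(round(flesch_kincaid_grade(sample), 1))
```

Note that the function sees only ratios: a familiar four-syllable word like “Minnesota” raises the score just as much as a genuinely difficult one, and nothing about word order, tone, or audience enters the calculation at all.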
Automated readability tests are based on formulas and the formulas are based on elements that can be counted. The Gunning-Fog index, the SMOG (Simple Measure of Gobbledygook) test, and the Flesch reading ease scale are all based on counting words per sentence and syllables per word. In addition, as Janice (Ginny) Redish has pointed out*, the assumption underlying readability formulas – “that any text for any reader for any purpose can be measured with the same formula” – is simply invalid. Redish notes that automated readability tests leave a lot of questions unanswered:
Needless to say, these questions are all very pertinent in assessing readability, but they don’t lend themselves to simple counts. And there are other substantial weaknesses, too:
The simple answer is that usability testing provides a much more comprehensive and accurate gauge of how your users and readers perceive your text. I’ll give you an example.
A while back, I subjected some text that is sent to applicants for Minnesota unemployment benefits to the Flesch-Kincaid test to find out the reading grade level. It was high, too high. One of the reasons for this was the common use of such four-syllable words as “Minnesota” and “unemployment,” and the three-syllable word “applicant.” But would typical applicants struggle with these words? No. How do we know this? Because we did usability testing with actual applicants. A two-syllable word, “appeal,” caused more difficulty than the longer words that would be flagged in an automated test. But what testers struggled with more than any particular word was the order of ideas. The correspondence followed a general-to-specific structure, beginning with background and reference to a Minnesota statute, before going on to explain the specific determination of the applicant’s eligibility for unemployment benefits. What our testers told us was that they wanted to see this order reversed: start with what’s specific to the recipient – their eligibility – and then go on to the general and background information. We suspected this would be the case going into the tests, but it was good to have it confirmed by actual applicants. In any case, no automated readability test could have helped either identify or solve this problem.
Similarly, the biggest issue with another piece of correspondence was tone. Automated readability tests will not tell you anything about that either.
A usability test with a special focus on readability will tell you much, much more about how actual readers perceive your text than the narrow focus on syllables and word count in an automated test. Automated tests are quick, easy, and free, and they have the allure of quantitative data. But you get what you pay for.
It’s interesting that President Kennedy had the highest average reading grade level in his State of the Union addresses: 13.2. But JFK understood the power of rhetoric pretty well, and I’ll bet that despite the high reading grade level, some of his words stick with you.
*"Readability Formulas Have Even More Limitations Than Klare Discusses," ACM Journal of Computer Documentation, August 2000, Vol. 24, No. 3.
Author’s Note: This blog entry is part of a series I started to explore two of today’s most popular eLearning rapid development tools: Articulate Studio and Adobe Captivate. Here is a link to an article that contains the whole Articulate vs. Captivate series.
In the previous blog entries, we have explored the major features of Articulate and Captivate, and discussed the strengths and limitations of each tool. Of course, there really isn't a winner. As I wrote at the beginning of this series, the only answer to the question “Which is better?” is “It depends.” The tools have different strengths and the best fit depends on your needs.
And for larger organizations or those with more complex or varied learning needs, the answer to the question “Which should I buy?” is often “Both.”
I've created a summary chart that I think clearly highlights the strengths of the two tools. Of course, some of these items can’t be reduced to a simple yes-or-no answer, so in some cases this chart simply reflects my opinion.
In 2012, we will see new players joining the rapid eLearning tool game. For example, Articulate Storyline and ZebraZapps are already attracting a lot of attention. There is also the possibility of new releases of Articulate Studio, Adobe Captivate, and SmartBuilder.
One of the interesting trends that we have noticed is the rise of mobile learning, and how the rapid eLearning tools are quickly incorporating functionality that gives them the potential to create mLearning content. For example, most of the new tools can publish your project as HTML5 or in the MP4 video format. This gives eLearning developers an easier path to get a course running on Apple mobile devices such as the iPad.
I expect to see more projects developed with these new tools in 2012 and I will be using them myself for Fredrickson's Learning business. As always, I'm glad to share my thoughts and findings with you and I appreciate your comments on these blog entries.
Thanks to everyone who attended our Self-Service Nation webinar.
During the webinar, I mentioned some of the navigation and other usability problems we found during a test we did on the City of Los Angeles' website. Here are some video clips from that test, so you can see the issues firsthand:
And here's the Self-Service Nation slide deck (.pdf). Again, thank you to everyone who attended. J. Hruby and I had a great time presenting and we hope you found it informative. Stay tuned for more webinars in the Fredrickson User Experience Webinar Series.
As we learned recently, Adobe has decided to stop releasing the new Flash Player for mobile devices after version 11. With so much eLearning courseware developed using Flash-based technologies, this announcement has naturally caused some turbulence in the learning community and raised some concerns about the future of online learning technologies. I had a good conversation with one of FredComm's best Flash developers last week about the future of development tools and trends for both mLearning and eLearning. Here is a summary that I'd like to share with you:
While the announcement about the mobile Flash player got a lot of attention, we believe that many may be reading more into Adobe's decision than is really warranted at this point. Adobe may be changing their direction as it relates to mobile devices, but this doesn't mean the end of the web as we know it.
It is certainly time to start thinking and learning about technologies like HTML5, but announcement of the discontinuation of the Flash mobile player doesn't mean that Flash is going the way of the dinosaurs.
Thanks to everyone who attended our seminar yesterday at the Minnesota IT Government Symposium. I hope you found it helpful and informative.
For those who didn't get a copy of the handouts, you can download a copy here.
Our November 8 Fredrickson Intersect meeting featured volunteer speaker Nancy Hoffman of the Minnesota Historical Society. Nancy reviewed with us the advantages and challenges of making available the data owned primarily by public-sector organizations. When given visibility and straightforward access, these volumes and volumes of data can be reused in ways that may make a significant difference in people’s jobs and lives.
You may view Nancy’s presentation and notes here.
See you at our February 14 meeting!
mLearning, or mobile learning, is not really something new. The origins of mLearning trace back to the 1970s or even before, when people took courses using audio tapes. Today, evolving technology has made it more feasible and effective to deliver a richer learning experience through mobile devices such as smartphones and tablet computers. Some companies have even started offering series of training courses on employees’ iPads.
mLearning offers exciting possibilities, but there are some basic considerations and differences that I think learning professionals need to consider or understand.
With newer and better mobile computing devices coming on the market every day, it’s very easy to become distracted by the technology side of mLearning. Remember, mLearning is still learning and the needs of the learner still need to be the first consideration.
You must ask the all-important questions: Who is your audience? What do they need to learn on the go? Why do they need to learn it? The learners who choose mLearning usually do so because their workplace is not a fixed location, or because their work environment doesn’t provide a good setting for learning.
Also keep in mind that many learners today prefer to learn small pieces in short intervals rather than take one long course. This is especially true for mLearning fans and it requires courses to be structured differently than conventional learning offerings. Additionally, mLearning can’t always be highly interactive or media-rich, but it has to find the right balance so that it meets the learner’s needs and expectations.
Now let’s look at the technology side of mLearning – the facet that is changing nearly every day. There is a huge variety of mobile devices and many variations in their capabilities; these facts can make developing mLearning quite challenging. For example, you can view your eLearning courses on a 10.1” Android tablet without a problem. But the same course won’t look good, or even be readable, on a 3.5” Android smartphone.
Another issue is the media players that are standard on mobile devices. As you probably have already heard, your existing library of polished Flash-based courses won’t work on Apple mobile devices such as the iPhone or the iPad. But they will work, to varying degrees, on Android devices. You may have also heard that HTML5 is a “replacement” for Flash.
It’s important to understand the current state of HTML5. HTML5 could eventually be a way to develop and deliver courses that work across mobile platforms without depending on proprietary players. However, right now HTML5 is still immature, and it has a long way to go before it becomes standardized and functional enough to be used the way Flash is used today. I offer this as a general view of the current state of HTML5, and only to emphasize that right now this technology is not in a position to do what many have heard it does…or could eventually do.
Another consideration is the way users operate mobile devices through touchscreens. This is a very different method of user input compared to the more traditional mouse and keyboard around which most eLearning is currently designed. In most cases, course developers and designers will encounter new challenges in developing mLearning interactions. Touchscreens also introduce unique usability issues. For example, buttons have to be big enough for finger taps and gestures, and these bigger user controls reduce the available screen real estate. The bottom line is that existing eLearning courses cannot really be “converted” into effective mLearning offerings. Because of the differences in learner needs and expectations, combined with the different capabilities of mobile devices, courses need to be developed or redesigned specifically to function as mLearning offerings.
Finally, let’s look at the security issues. Many of the corporate eLearning courses and talent development curricula contain confidential or proprietary information. They cannot be pushed to the learners through the public app stores as conventional mobile apps are. It’s difficult to say much more in general about security concerns with mLearning other than to point out that the considerations can change once learning products go mobile. Essentially, learning and IT professionals need to consider what (if any) security risks are presented by introducing an mLearning offering, conduct appropriate testing, and take actions to make sure the required level of security can be maintained.
Thanks again to Jed Becher, Web Coordinator at the Minnesota Department of Natural Resources, for putting together the presentation for our August 9 Intersect meeting. Also, thanks to Rachel Dobbs, FredComm’s own analytics guru, for chiming in on how the Webtrends tool works in comparison with Google Analytics.
So many of us are curious about how analytics work, what we can use them for, how we can report on them to better serve our audiences and our marketing efforts…the list goes on. Jed’s overview gave us all a great start on understanding all of these complexities.
As promised, Jed provided links (from a DNR server) to his presentation as well as to an electronic version of Portent Interactive’s coding “cheat sheet” that he passed out in the presentation. The e-version can be expanded to a larger, more readable font so that you can really use it!
Link to the PowerPoint presentation
Link to Portent Interactive’s “cheat sheet”
Also, many of you already saw the email I sent with information from Marc Drummond of the City of Minnetonka about two upcoming Association of Webmasters events. Here’s another quick shout about both of these:
See you at our next Intersect meeting on November 8!
Author’s Note: This blog entry is part of a series I started to explore two of today’s most popular eLearning rapid development tools: Articulate Studio and Adobe Captivate. Here is a link to an article that contains the whole Articulate vs. Captivate series.
In my last blog entry in this series, I explored Articulate Studio in more detail. Now it’s time to do the same with Adobe’s Captivate.
Captivate is a comprehensive rapid eLearning development tool for creating software demonstrations, interactive simulations, and quizzes. Compared to Articulate Studio, Captivate offers a better workflow for taking the developer from screen recording to interaction building. Most Captivate projects follow the “see it, do it” approach. In the “see it” segment, the learners watch a recorded demonstration. In the “do it” segment, the learners complete a series of tasks in the simulated environment -- for example, adding information to a customer’s account.
Like Articulate Studio, Captivate provides the users with some essential functionality, such as customized skins so that the look and feel can be modified. It also offers text/graphic animations, audio synchronization, interactive components, and publishing options for both web and LMS delivery.
Let’s take a closer look at these features.
To enrich the functionality of Captivate, Adobe has developed some add-on applications, such as text-to-speech, widgets, a review tool, and a quiz result analyzer and aggregator. Developers can find even more add-ons on the Adobe Exchange. Articulate has a similar online community and encourages developers to submit their customized interactions.
The main difference that I have observed between the two online communities is that the Adobe Exchange community tends to be more willing to share code and methods for free. Of course, these are often just starting points; the developer then needs to finish the object. The Articulate community members, on the other hand, will often offer finished enhancements such as interactions, but because these finished objects took more time to create, the members often want to charge a fee.
After comparing Articulate and Captivate side-by-side, we have seen a lot of similarities and a few significant functional differences. One of the biggest differences I can highlight is the development process and the mindset it takes to get the most from these tools. In the next entry, I will conclude this Articulate vs. Captivate comparison series by discussing my views of the circumstances and uses where I think each of these tools excels.
Marshall McLuhan would have turned 100 years old last Thursday, July 21. What would he have made of a world of smartphones and Facebook and nanotechnology?
Many of us today associate McLuhan with a couple of catchphrases – “the global village” and “the medium is the message” – and not much else. Even though Wired referred to him as “Saint Marshall” back in 1996, McLuhan today is more talked about than read, but that was probably the case even at the height of his popularity in the 1960s. Northrop Frye, a fellow professor of English and contemporary of McLuhan at the University of Toronto, said in 1988, “McLuhan was celebrated for the wrong reasons in the 1960s and then neglected for the wrong reasons later.”
I don’t pretend to be an expert on McLuhan. Although I read his Understanding Media (1964) as an undergrad, it wasn’t until grad school that I came to develop a greater appreciation and understanding of him. Part of the reason for that was having a professor who was one of only a few people to have written a doctoral dissertation under McLuhan’s supervision. (Many professors warned their students away from McLuhan due either to jealousy or to a perception that he was an academic charlatan.) Part of my interest was simply a result of timing; with the rise of the Web and of globalization through the 1990s, suddenly McLuhan seemed to make more sense to people. And part of it was a kind of national pride in the global influence of a fellow Canadian.
McLuhan’s thought is subtle and complex and has been frequently misunderstood. It’s a risky venture to go into print talking about it – you don’t want to sound like the pompous prof in Annie Hall who, trying to impress his date with his deep knowledge of McLuhan’s concept of hot and cool media, gets everything wrong and then is scolded by the master himself. I happen to have Marshall McLuhan right here...
The central idea at the heart of McLuhan’s work is that, in his own words, “all media, from the phonetic alphabet to the computer, are extensions of man that cause deep and lasting changes in him and transform his environment.” McLuhan defines media broadly – you could in fact replace that term with “technology.” For example, wheels are extensions of our feet, clothes of our skin, the telescope of our eyes, the computer of our central nervous system, and so on. In extending our different senses in different ways, each medium, or technology, changes the balance of our sensorium. “Such an extension,” he said, “is an intensification, an amplification of an organ, sense or function, and whenever it takes place, the central nervous system appears to institute a self-protective numbing of the affected area, insulating and anesthetizing it from conscious awareness of what’s happening to it.” And so we are as unaware of the new environment created by media as is a fish of the water it swims in. (He once said, “I’m not sure who discovered water, but I’m pretty sure it wasn’t a fish.”)
The first great disruption of the human sensorium, according to McLuhan, came with the introduction of the phonetic alphabet, which installed sight at the head of the hierarchy of senses. “Literacy propelled man from the tribe, gave him an eye for an ear and replaced his integral in-depth communal interplay with visual linear values and fragmented consciousness.”
The next great disruption came with the printing press:
If the phonetic alphabet fell like a bombshell on tribal man, the printing press hit him like a 100-megaton H-bomb. The printing press was the ultimate extension of phonetic literacy… Type, the prototype of all machines, ensured the primacy of the visual bias and finally sealed the doom of tribal man. The new medium of linear, uniform, repeatable type reproduced information in unlimited quantities and at hitherto-impossible speeds, thus assuring the eye a position of total predominance in man’s sensorium. As a drastic extension of man, it shaped and transformed his entire environment, psychic and social, and was directly responsible for the rise of such disparate phenomena as nationalism, the Reformation, the assembly line and its offspring, the Industrial Revolution…
The third great disruption came with the introduction of electronic communications technology:
The electric media are the telegraph, radio, films, telephone, computer and television, all of which have not only extended a single sense or function as the old mechanical media did… but have enhanced and externalized our entire central nervous systems, thus transforming all aspects of our social and psychic existence. The use of the electronic media constitutes a break boundary between fragmented Gutenberg man and integral man, just as phonetic literacy was a break boundary between oral-tribal man and visual man.
In the 1960s, with the effects of electrification in general, and of television specifically, so widespread and rapid, he saw it as essential to try to understand them. Many people misunderstood McLuhan to be celebrating a new post-literate electronic age, and all of the social upheaval that came with it. This was not necessarily the case. He was not really celebrating or damning anything – he was simply trying to understand. Similarly, his concept of the "global village” was misunderstood by some as a celebration of a new electronic age in which the world would have a Coke and learn to sing in perfect harmony. But this was not at all what he meant:
“The more you create village conditions, the more discontinuity and division and diversity. The global village absolutely insures maximal disagreement on all points. It never occurred to me that uniformity and tranquility were the properties of the global village … The tribal-global village is far more divisive – full of fighting – than any nationalism ever was. Village is fission, not fusion, in depth … The village is not the place to find ideal peace and harmony. Exact opposite. Nationalism came out of print and provided an extraordinary relief from global village conditions. I don’t approve of the global village. I say we live in it.”
McLuhan saw it as his role as a teacher to make people aware of the environment they are swimming in, the extent to which that environment is created by technology, and the profound effects it has on our biases and modes of thought. If that idea no longer seems as radical or strange as it once did, McLuhan is largely to thank for that.
Reading McLuhan today is as rewarding and fascinating as it ever was. He wasn’t right about everything (I’ve written “BS” in the margins a few times), and some of his ideas seem eccentric, but he was remarkably prescient about many things. For example, if you watch and listen to this RSA Animate video of a talk by Sir Ken Robinson on changing education paradigms, the ideas Robinson presents are very similar in many respects to ideas McLuhan proposed on education throughout the 1960s.
If you have a chance to dip into some McLuhan, you will have your own illuminating moments. A good starting point is an interview McLuhan did with Playboy, of all publications, in March 1969. I’ve quoted from it in this post because it presents an accessible summary of his thinking.
Another good entry point is the collection of essays he wrote with Edward Carpenter in Explorations in Communication: An Anthology, published in 1960 and available online. (You can get a free trial for a day from Questia.)
If you really want to immerse yourself, The Gutenberg Galaxy (1962) is probably his best work.
Then you can tell me what he would think of smartphones, Facebook, and nanotechnology.
Looking for material for a leadership development program? Working on your own leadership skills? Seeking an alternative to leadership frameworks, checklists, and theories?
Learn Like a Leader will satisfy any of these objectives. It's a collection of essays from leadership experts edited by Marshall Goldsmith, Beverly Kaye, and Ken Shelton. Each essay is about the writer's own learning experiences and lessons. You can dip in anywhere and get something useful within just a couple of minutes. And with 35 different perspectives, it's easy to find some stories to really connect with.
The editors have supplemented the essays in several ways to support the readers' learning:
I compare this book to the kind of cookbook that's a bunch of stories interwoven with a few special recipes. Much more enjoyable than a how-to book, with just enough structure to turn inspiration into action.
I was in China a week ago (for work and play) and I came across a statue that immediately caught my attention. It was of a woman with three heads and four arms. I took a picture in front of it and sent it to a group of professional pals, saying . . . I have the answer to all our problems! There is so much to learn, that three heads are definitely needed . . . and then when you learn just some of what’s needed....four hands would certainly help in the multi-tasking that follows.
Then I thought about all the learning opportunities that we have in our field. Opportunities like the upcoming event in July. Personally, I always make time for events like this in my own professional life because they provide a chance to talk and learn from colleagues as well as from whoever is in front of the room.
As the person “in front of the room” on July 21st, I’ve thought a lot about what might make this a learning event that would be useful for your organizations, and one that would be personally useful as well.
I think that all too often, we in the human capital arena are “cobblers without shoes!” We work so hard to deliver to our customers and clients that we often don’t have time to try our own initiatives on for size! So, as I design the morning, know that I have two goals in mind. First, I want to tell you about my recent thinking about one of my niche areas – career development. I’ll tell you where my thinking has gone, how I believe we may need to deliver the message about careers differently, and where it is indeed still the same.
I also have another goal. That is to have you think about your own careers in this fast-changing field of ours, and consider the choices you might have ahead of you. I’ll be picking and choosing from some of the thinking of my colleagues who have looked at careers in our own field, and asking some provocative questions of all of you.
I’ve heard (from the sponsor and from my good friend Richard Leider) that you are a great group who enjoy getting time with your colleagues as well as time with a speaker. I will try to honor both.
I am on the road pretty continually (making up for three weeks away) till I see you in July. If you have questions, comments or thoughts…about the wide subject of careers in the ever changing world of ours and how to develop talent….let me know. I may not be able to answer, but I will definitely ponder your ideas.
See you soon.
One of the most important activities in life in general--and certainly in business--is building and keeping a network of friends and professional colleagues. Your network will serve you well in every area of your life. At Fredrickson Communications, we believe in the importance of networking so much that we've founded and continue to sponsor groups like the Fredrickson Roundtable for Learning Leaders. We have been building this network for nearly two decades and we continually work to expand the group throughout the business community in the Twin Cities.
While JIT (just in time) delivery can work well in training situations, it's definitely not a technique that works for networking. Don’t wait until you need a network to start building one.
Here are a few tips I'd like to offer you for successful networking:
I am continually shocked at how many bright, creative people don’t make time for this important task. It may seem as if I am writing to people who are new to the business world—I’m not. I’ve coached a number of high level executives (when they have lost their jobs) who thought they didn’t have time to build a network while they were working. Wrong! It’s important to make time and to learn how to build this into a natural part of your week.
Don’t know where to start? Give your networking efforts a boost by asking yourself this: Where do my peers gather in a group? If you don't know the answer, you have your first networking homework assignment: Find out!
For example, Fredrickson's Learning Leadership Summit on July 21 will attract over 125 leadership-level learning professionals. If you are a manager, director, VP or other leader in the Twin Cities' learning community, this is where your peers will be gathering. Not only will you have an opportunity to hear Beverly Kaye, you’ll have a morning to spend with more than a hundred of your colleagues. Surely every attendee will have the opportunity to meet several new colleagues they would like to have lunch with.
And that’s how networks are built—one person at a time.
In my eight years with Fredrickson, usability (a.k.a. user experience testing) has always held a special interest for me. It seems like such a “no-brainer” to me to spend a very minimal amount of time and budget – relative to the overall size and budget of many projects – to make darn sure that the users of the site/system/application will be able to do what they need to do or find what they need to find. When labels or language or the organization of information, etc. aren’t quite right, the time and money wasted via user frustration, lost customers, complaints around the water cooler, help desk time, etc. (the list goes on!) is a far bigger issue.
Our Fredrickson Intersect meeting on May 10 at the City of Roseville was dedicated to showing the process of usability testing in action. Our Director of Usability Services, John Wooden, came prepared with both a recording of a previous usability testing session and scenarios to do live testing with volunteers on a member’s website. It was quite the eye-opener for many who have not experienced it before. One member said, “I really liked last week’s session. I haven’t had any experience with usability testing (I’m ashamed to say) so I learned a lot. It was particularly interesting to watch the live volunteers.”
As is the case with any project or experience, when we’re “heads-down in it,” we don’t always see the reality of it as others might. Usability testing provides the irrefutable truth – real data – around whether something is really working for the users. Another member said, “I thought the presentation was GREAT. The live demo really illustrated how usability testing with real users doing real tasks sheds light on issues/problems site owners aren’t aware of. I kept wishing there were more developers from [our organization] there to see how important it is to get user feedback early and often.”
So, in my humble but somewhat experienced opinion, take Nike’s advice and Just Do It!
Here are a few resources where you can learn more:
As always, if you’d like us to bring an eye-opening presentation like this one into your organization, please contact us.
Introduction: Raj Alphonse is a Fredrickson Communications affiliate specializing in learning technology consulting. This blog entry is a lead-in to the April 14, 2011 meeting of the Fredrickson Roundtable for Learning Leaders, where the featured discussion topic will be "The LMS Wishlist."
Have you run into a brick wall lately? I felt like I did when I saw an article in the March issue of the CLO magazine titled, “Assessing Learning in a Post-LMS World.”
Did I read that right? Post-LMS? Is the LMS dead? The world stopped for a moment, then I felt dizzy. In disbelief I asked Google, “Is the LMS dead?” and got 16,900 results, including an article titled “Is the LMS Dead?” from CLO magazine dated September 26, 2010.
In six months CLO Magazine has gone from pondering whether the LMS was dead to a dissertation on a post-LMS world. The authors assure us that a “post-LMS world ... merely means that assessing learning only utilizing an LMS is becoming obsolete.” Sad but true: there is ample evidence to support this notion. Just spell out L-M-S loudly to a gathering of training professionals and watch the reaction. No matter what LMS they use, everyone will have at least one gripe, one horror story, one wish. Summarize the feedback, and you can see the writing on the wall: the LMS badly needs to evolve.
Can this lumbering beast get its groove back? What can we do to make the feedback heard by those who can do something? Create a blog of gripes, a book of horror stories? That would be too negative.
Instead, how about we compile an LMS Wishlist and send it to the Beast Makers? A list that spells out what you want. And what you don’t want. A list that puts the spotlight on the gaps, goof-ups, and glaring omissions. A list that points to features no one asked for. A list to transform the “monolith” mindset of LMS designers into a “modular” mindset. A list to upgrade the evolutionary effort to a revolutionary one.
There is another benefit to drawing up a collective LMS wishlist: we can learn what everyone else wants, needs, likes, and dislikes. This is the beginning of a conversation about the future of the LMS because it shows us where learning professionals want to go and what they want to leave behind.
So please make a wish and make it known. This blog entry is just for that.
All wishes are welcome; no wish is too small, too large, or too far-fetched. There is no limit to how many wishes you can have. Wishes may relate to operation, budget, technology, infrastructure, user interface, reporting, or whatever else you have in mind.
And hurry please, before CLO Magazine starts thinking about an LMS autopsy report.
I have more information and links to share after the last part of our Intersect group's three-meeting series about accessibility on February 8:
Full CART services transcript of the meeting: Intersect February 2011 Transcript (.txt document).
PowerPoint Presentation used by Tanya Belanger at the February Intersect meeting. (.zip format for download)
Information about the Science Museum of Minnesota's class on PDF Accessibility. This course is Part 4 of their Acrobat training series.
Information about EASI’s webinar series on accessible PDFs.
For more useful links, see my previous blog entry on this subject. Once again, our thanks to Tanya Belanger from Minnesota's Office of Enterprise Technology for all her contributions to this valuable Intersect discussion series.
Last week I saw a demo of 3M’s Visual Attention Service (VAS), a web-based application that applies an algorithm to predict with greater than 85% accuracy what users will focus on in the first 3-5 seconds of exposure to images. The predictions are based on 3M’s 30 years of research into the science of vision and take into account factors such as color, contrast, edges, size, and the presence of human faces.
If you design in-store displays, outdoor signage, or websites, VAS is a useful tool to add to your kit. Say you have put together three alternative design mockups for a website home page, and the home page needs to convey three key messages. Because website visitors tend to be impatient and make decisions very quickly, it’s important to know what they are likely to notice within 3 to 5 seconds. With VAS you can highlight three key areas of each home page design alternative and determine how successful each one is in quickly attracting user attention. You can then tweak your design and see what difference that makes.
In the examples below from Target’s home page (image taken on February 8), you can see what VAS predicts users will quickly notice.
So how is VAS different from eye tracking software?
The big difference is that eye tracking requires human subjects to determine what people actually look at. VAS makes a prediction based on an algorithm. So the main benefits relative to eye tracking are speed and low cost. (VAS is not free though. You will likely want to buy credits if you are going to be an occasional user or a subscription if you plan to use it frequently.)
The limitations of VAS are pretty obvious. It tells you what people are likely to focus on in the first 3 to 5 seconds, and that’s all. It does not tell you whether users will like and respond to those messages or images, whether users will stay on your site, or whether your site is easy to use. It doesn’t give you a lot of insight into how your users will think or behave. For that, usability testing is still the best option. In this way, VAS is a useful tool in the same way that spellcheckers and color contrast analyzers are useful tools. It has real value during the design process, but it can only tell you so much. It doesn’t replace getting feedback directly from real users.
Let me give you one quick example that helps illustrate this point. Humans are wired to focus on other human faces – we are drawn to look at them. (Think of LinkedIn profiles – you’re more likely to look at the ones with photos.) In a similar way, we are attracted to look at anything red. The example from Target demonstrates this well. But using red or including photos with faces is no guarantee of sustained attention and interest. I ran the home page of another site that we usability tested recently (which I can’t show you) through VAS and it predicted what I expected – that the two parts of the page with images of faces would be more likely to attract attention in the first few seconds than the other parts of the page. But regardless of whether our usability test participants noticed these images in the first few seconds, their comments and behavior indicated that they were not very interested in the sections associated with these images.
Despite our inherent interest in human faces and attraction to red, the answer to drawing a user’s attention is not always going to be a photo of a woman in red. Users are sophisticated, and though they might initially notice certain images, if they associate them with advertising or fluff, or consider them to be obvious stock photos, they are likely to actively avoid them. Large text, subtle color, and good contrast can also effectively attract attention and convey key messages.
Give VAS a try and see what you think. Just bear in mind what precisely it is telling you, and what it isn’t telling you.
We've posted a new article over on the Articles page: Learning Trends - Where will they lead in 2011? If you haven't already, head over there and take a look.
And now give us your comments. Or even post a prediction of your own and let us see where you think the learning and development community is headed this year.
Over to you.
Author’s Note: This blog entry is part of a series I started to explore two of today’s most popular eLearning rapid development tools: Articulate Studio and Adobe Captivate. Here is a link to an article that contains the whole Articulate vs. Captivate series.
In the first entry of this series, I introduced two of today’s most popular eLearning rapid development tools--Articulate Studio and Adobe Captivate. Now I’d like to talk about each of them separately and in more detail, starting with Articulate Studio. In the process, I’ll also discuss some best practices that may help with your development.
Just in case you’re new to Articulate Studio, I want to mention that there are four main components: Articulate Presenter, Engage, QuizMaker, and Video Encoder. If you need info or a refresher on what each component does, have a look at Articulate’s website.
Let me start by asking you a simple question: What is Articulate Studio?
The answer I most often hear goes something like this: “Articulate converts PowerPoint to a Flash presentation.” Technically, this is a true statement and it’s one of the factors that attracts many people to Articulate in the first place—it doesn’t require much in the way of programming skills to jump on board. Although using Engage and QuizMaker requires more practice, most users can get familiar with these Articulate Studio components in a short period of time.
For those shopping for rapid eLearning development capabilities, it can seem as if all you need to develop a good course is PowerPoint content to run through Articulate and out comes eLearning. This is an especially attractive proposition for those who are tasked with “converting” instructor-led training courses to be delivered as eLearning.
The problem that I hear over and over from both eLearning developers and actual learners is that the “PowerPoint look” of Articulate courses wears thin very quickly. Something’s missing, but what?
To answer this question, I have to stray a little from talking about tools and take a quick dive into instructional design. As you probably know, the traditional use of PowerPoint is in classroom-based training, which is also called synchronous or instructor-led learning. By contrast, Articulate eLearning courses are, of course, an asynchronous (self-paced) learning experience.
You probably see where I’m headed already: even if the course contains the same content, we have to take quite different approaches once the delivery medium changes. To substitute for the richness of activities and interactions that can take place in the classroom, we need to build a new layer of richer interaction and engagement on top of the content in the PowerPoint in order to make it effective as an eLearning course. When this layer is missing, people see the course as a shallow PowerPoint presentation, not as real learning.
I know that this problem is not just an Articulate Studio problem, but because of Articulate’s direct link to PowerPoint, it seems even easier for Articulate users to fall into this trap. Remember, a PowerPoint presentation is only one ingredient. One ingredient doesn’t make a cake.
Fortunately, Articulate Studio gives you plenty of options to produce a richer eLearning course that goes beyond PowerPoint: Engage interactions, quiz questions, Flash movies, and even customized Flash games. In addition, Articulate allows you to deliver your content through branched scenarios, another effective way to hold learners’ attention.
Articulate Studio offers a lot of eLearning potential in one package. I’m not going to do a feature-by-feature list here--you can easily get that information elsewhere. Instead, I’d like to highlight just a few features that I think are significant and either little-known or not often used to their potential:
After this discussion of my favorite features, I feel I have to deliver a brief word of warning. I’ve been using Articulate for about 7 years now and the product has evolved significantly. Many people used to see Articulate as a simple tool that would enable anyone to develop eLearning. This may or may not have ever been true, but over time eLearning developers and instructional designers have demanded more and more sophistication. Articulate has largely delivered, but this means that to take advantage of the richer features, you have to be more and more skilled as a developer. Therefore, I think it’s best to look at Articulate as a “development suite”: the results are closely linked to the developer’s skill and the instructional designer's understanding of how to design learning that takes advantage of Articulate’s strengths.
Since most Articulate courses involve an audio presentation with closed-caption text, they require a different design approach in PowerPoint. Research indicates that when audio and static text are presented at the same time, audio is the dominant and more efficient channel. Therefore, it’s often a distraction if the bulleted text repeats the audio. In many cases, it’s more effective to replace bulleted text with graphical elements like photos, illustrations, flowcharts, and animations.1
In my previous blog entry, we talked briefly about software training. Can I use Articulate by itself to develop this kind of training? Again, it depends on what you want to achieve in the training. If the training only involves demonstration, you can insert a series of screenshots on the PowerPoint slides, and then spice them up with the annotation tool in Articulate. Gerry Wasiluk posted some excellent information on this topic as comments to my first Articulate vs. Captivate blog entry.
Or, you may opt to use one of the screencasting tools, for example Screenr. With these tools, you can easily export your screencasts as video clips and then insert them into your Articulate course. However, if you want to include a comprehensive simulation in your course, I would say that Articulate is not your best option. If software simulation is your goal, you should consider Captivate, which I will cover in the next entry in this series.
1 Of course, a transcript should be available so that learning content can be accessed by those who cannot hear the narration.
Author's Note: This blog entry was the beginning of a series I started to explore two of today’s most popular eLearning rapid development tools: Articulate Studio and Adobe Captivate. Here is a link to Part 3 of this series.
The State of Minnesota is in the midst of implementing standards to make technology accessible for those who have hearing and/or vision impairment. This is an important and massive mission, and will take many years to get to a place where true accessibility is more “the norm” than not. But every baby step counts, especially to those who rely on assistive devices or other alternate ways to access information via technology.
On Tuesday November 9, the Fredrickson Intersect group helped facilitate training about this initiative by having Tanya Belanger of the Minnesota Office of Enterprise Technology present on how to make documents and presentations accessible. This was the second session of our three-part series about this specific undertaking.
We promised to make downloads of Tanya’s presentation available, as well as the Social Security Administration’s Guide to Producing Accessible Word and PDF Documents – a valuable resource which Tanya discussed in our session – so here are the links:
The Social Security Administration's Guide to Producing Accessible Word and PDF Documents (Microsoft Word document)
PowerPoint Presentation used by Tanya Belanger at the November 2010 Intersect meeting (.zip format for download)
Jed Becher from the DNR also sent this great document about improving accessibility with Adobe's InDesign: Creating Accessible PDF Documents with Adobe InDesign CS4 (.pdf document)
Finally, here's the full CART services transcript of the meeting: Intersect November 2010 Transcript (.txt document)
Many of us have had at least one frustrating experience with an interactive voice response (IVR) system – getting lost in a maze of menu options, never hearing an appropriate option, never being offered an option to speak to a customer service representative, arriving at a dead end, getting cut off during a transfer to a representative, and so on. My father ended up shouting at an IVR system with voice recognition because it kept saying, “I’m sorry, I did not hear you. Please choose from the following options …” Eventually, he just hung up.
IVR system usability has not received nearly as much attention as Web usability, and perhaps it’s no surprise that over the years IVR systems have collectively developed a bad reputation. This doesn’t mean there are no good ones, but if you ask people, most will tell you they don’t have a favorable impression of them. Instead of being perceived as useful tools for self-service, they are commonly thought to be obstacles deliberately placed between customers and a live human in an organization’s customer service department. In 2005, Paul English was frustrated enough with his IVR system experience to publish “The IVR Cheat Sheet,” which listed the codes that would allow a caller to speak directly with a representative in dozens of companies.
If an organization’s primary objective in having an IVR system is truly not to block customers from speaking to an agent or representative, but rather to try to provide a good automated self-service experience, then it needs to take IVR usability seriously, just as seriously as it takes the usability of its websites and applications.
In most cases, this means conducting usability tests. The methodology for usability testing IVR systems and websites is essentially the same in most respects. You need representative users from your main user groups, a list of task scenarios and key questions to ask, a quiet place to test, and for IVR testing, a phone with a speaker. (You can test an IVR script before it is recorded simply by having the facilitator read prompts and asking the tester to describe which options they would select.) We also use Techsmith’s usability testing software to record the calls (with tester permission) and to capture tester actions and feedback. By observing testers and listening to their questions and comments, usability analysts can learn what is working well in an IVR system and what needs improvement.
For example, in a recent test of a government IVR system, the overall feedback we heard was positive. This was already a relatively straightforward system to begin with – most of the prompts involved a simple binary choice: 1 for yes, 2 for no. Still, the test revealed dead ends in the menu, some common misunderstandings of prompts and transitions, and issues with the password process. And so now after testing, this organization is able to make their system even better and thereby reduce the number of callers who want or need to speak with a representative.
Of course, the major design constraint of any IVR system is that it is primarily an auditory medium. IVR systems require users to listen – often closely – and each option must be presented sequentially, which places a load on the user’s working memory. In contrast, the web is primarily a visual medium that can use layout, color, font size, text, and images to organize and present information. And links can offer a user tremendous navigational control and flexibility. The ability to see information and control pacing and progress are the key reasons why more users prefer doing self-service online than through an IVR system.
Still, even within the constraints of an IVR system, it’s possible to provide a good experience by following some important guidelines:
Note: Many lists of IVR system heuristics provide similar guidelines. One especially useful source that I consulted for this entry was Bernhard Suhm's article, "IVR Usability Engineering Using Guidelines and Analyses of End-to-End Calls," in Gardner-Bonneau and Blanchard's Human Factors and Voice Interactive Systems, 2008.
With rapid eLearning development tools becoming prevalent, course development is getting faster and some aspects are getting easier and less costly. Among the many such tools on the market, Articulate Studio and Adobe Captivate have become the most popular and widely used among our clients.
As an eLearning consulting company, we are often asked for advice on which is best: Articulate or Captivate? This question often comes from corporate learning groups who want to choose a standard tool for use within their company or group.
I want to note here that when I refer to “Articulate” in these blog entries, I’m referring to the full Articulate Studio package. While it is possible to buy individual Articulate products (like Articulate Presenter), I don’t think this makes sense for most needs because without the full Articulate Studio, the functionality and results would be limited.
So which is better, Articulate or Captivate? Of course, there’s no clear way to answer this question except to say “it depends”. Both tools work well in different areas and for different reasons. I’ll start this series of blog entries with the things that both Articulate and Captivate have in common. In upcoming entries, I’ll look at what each tool does well and not-so-well.
I have to add that the skill and experience of the developer does still matter. These tools are often purchased with the expectation that anyone will be able to use them to create great eLearning courses. The problem is that as developers and learners have demanded more sophistication from the courses that these tools produce, the number of features and the complexity of using these tools has increased with each new version. Whichever tool you choose, there is no substitute for knowing how to use it efficiently and effectively. The more skilled and experienced you are at using these tools, the better your results will be.
Since I’m a developer, I can’t resist starting with ease-of-development. From this standpoint, both tools are relatively easy to jump into (at least at a basic level) without extensive coding knowledge or formal training. Basically, developers use the built-in templates to build courses by adding written learning content, creating interactive components, and then adding audio, and so forth. The templates take care of the user interface, the navigation, and other features so these don’t have to be built from scratch as they would if you were developing using other technologies like Adobe Flash.
Both Articulate and Captivate have a number of features in common:
Now we come to the point where the tools start to diverge. Articulate and Captivate work differently, and each tool has advantages and disadvantages when it comes to certain features and uses. To understand which tool is a better choice, you need to consider the tools in light of your or your organization’s needs and the types of training you develop or intend to develop. You also need to consider the developer skills you possess or, in the case of a corporate learning group, the skills available on your team.
In the following entries, I’ll walk through what I think are the key functions of each tool, the types of training that I think they work best for, and finally I’ll give some thoughts about developer skills, publishing and deployment concerns, and other considerations.
Author's Note: This blog entry was the beginning of a series I started to explore two of today’s most popular eLearning rapid development tools: Articulate Studio and Adobe Captivate. Here is a link to Part 2 of this series.
We have some video highlights from this summer's fifth annual Learning Leadership Summit. Our theme was The Power of Purpose for Learning Leaders and our featured speaker was none other than Richard Leider, the international bestselling author of The Power of Purpose.
At Fredrickson Communications, we recognize that leading a learning organization is a unique challenge. We sponsor the Learning Leadership Summit to help this special brand of leader to grow and prosper, both personally and professionally.
The Summit was a huge success again this year and we had over 100 learning leaders in attendance. Richard was a fantastic, thought-provoking speaker and, as always, the Summit is the best networking opportunity in the Midwest for learning leaders.
Are you in a leadership role in a learning organization in the Twin Cities or surrounding areas of Minnesota and Wisconsin? If you'd like to be added to the invitation list for the 2011 Learning Leadership Summit, just contact us.
Minnesota Public Radio recently aired an interview with Matthew Crawford, author of the bestselling book Shop Class as Soulcraft: An Inquiry Into the Value of Work. I have read this book and I think Crawford provides learning professionals with a lot to think about.
It’s not possible (for me, at least) to reduce Shop Class to a simple “here’s the point” statement. The book is an exploration of how our work, and our relationship to our possessions contributes (or fails to contribute, as is more often the case) to our sense of fulfillment as people. Along the way, Crawford touches on many other issues related to education, society, and the workplace.
The book does discuss some of what the title most directly implies: our societal view of the so-called “manual trades” and the related decline in the promotion and teaching of the trades as valid and secure ways to make a decent living. Crawford also does a very good job of challenging some of the myths of modern work, for example that there’s no thinking involved in so-called “manual” trades.
One of the main concepts of the book is Crawford’s exploration of individual agency: the ability to observe firsthand the effects of one’s actions on the world.
As we’ve marched toward becoming an “information society” of so-called “knowledge workers,” our individual agency has rapidly declined. As knowledge workers, our jobs have become largely about doing a piece of a piece of a piece of a part of the whole. In other words, many of us today do work that is largely devoid of individual agency.
This represents an almost total reversal of the millennia-long trend of our development as a species, in which we constantly increased both our technology and our individual agency. We used tools of growing sophistication and saw firsthand the product of our labors with those tools. Now that trend seems to be reversing, and our relationship to our material possessions and tools is changing from that of master to that of servant.
Or let’s be honest, if we’re talking about any device with a power cord, we’re basically slaves.
If the tools of our “information society” fail to work, we’re helpless. If the object is even meant to be repaired at all (a very big “if” these days), our only option is to call a repair professional, or trudge to the dealership or the repair shop (if such an option even exists!) and implore the tradesperson to please, please fix it. Our relationship to our possessions has devolved and in many cases we've become more helpless bystander than owner. The start of a reasoned case for the value of the manual trades, perhaps? Read the book!
Shop Class doesn’t really offer solutions, but it provides plenty by way of perspective for HRD professionals. From the intellectual challenges of manual work to an exploration of why concepts like individual agency are so important to our sense of job satisfaction and fulfillment, there’s plenty that HRD professionals can take from this MPR interview and from the book.
Here’s the interview on MPR:
And here’s the book (just out in paperback) on Amazon.
Today at our annual Learning Leadership Summit, a person at my table brought up Daniel Pink's book Drive: The Surprising Truth About What Motivates Us (2009), saying how much she enjoyed it. A couple of others concurred, and then we tried to name the three things that Pink says matter most to motivation for today's workforce. We remembered the first two--mastery and autonomy--but none of us could think of the third. After looking it up, we had to laugh at the irony. The third item is purpose, and here we were participating in a session on that very subject.
Talking about purpose is not a fad. As the coincidence between Pink's work and Leider's work shows, there is a growing body of research that identifies a link between having a purpose and being happy and healthy. And of course, there are countless examples of the difference in success between companies and projects with clear purposes vs. those without. Finally, we all know the mantra about effective communication: purpose, audience, and scope.
Check out more on the subject at these resources:
In their well-known test of selective attention, psychologists Christopher Chabris and Daniel Simons asked test subjects to watch a short video in which two teams, one in black shirts, one in white shirts, move around and pass a basketball to one another. The subjects were asked to keep a silent count of the number of passes made by the team wearing white.
About 25 seconds into the video, someone wearing a gorilla suit strolls into the middle of the passing game, beats their chest, and then strolls out again. The gorilla is on screen for nine seconds. The correct answer to the question about the number of passes is 15. But the real question was “did you see the gorilla?” About half of the test subjects did not.
Watch the video.
In their new book, The Invisible Gorilla and Other Ways Our Intuitions Deceive Us, Chabris and Simons discuss the “illusion of attention” and our lack of awareness about the limitations of our perceptions, memories, abilities, and knowledge. (For example, more than 63% of Americans think they are more intelligent than the average American.) Their gorilla experiment demonstrates that when we focus our attention on one object or action, we can easily miss anything else going on around it. Other experiments found that people who missed seeing the gorilla had their eyes on it, but they didn’t see it because it was not what they were looking for.
Our tendency toward selective attention has implications for web interfaces. If you have ever participated as an observer of a usability test, you may have had the experience of watching a test participant fail to see what you perceive to be an obvious link, or button, or some other interface element. Seconds, or minutes, go by while the participant struggles, and you’re thinking, “Why in the name of all that’s good and decent can they not see that link? It’s RIGHT THERE! CLICK IT!!”
Users may miss a link or some other element for all kinds of reasons – lack of white space, crowding, small font, sub-optimal contrast – but one common reason is that the link (or button, or whatever) was like the gorilla in the experiment: users didn’t see it because it was not what they were looking for. In many instances, they were looking for other words – the words actually used didn’t match their expectations. They were looking closely for X and therefore didn’t see Y. So much of usability has to do with language, with using the keywords that match what your users have in mind. And even subtle differences can adversely affect a user’s ability to find what they are looking for.
Another common reason why users miss seeing what might seem blindingly obvious to a development team is that usage convention has led the user to expect one interface element when another has been used instead, such as a link instead of a command button.
Convention also leads us to expect certain elements in particular places – primary navigation at the top or on the left, a search box in the upper right, contact information in the footer, ads or other "fluff" on the right, and so on. If a user misses an interface element, it may be because it was not in the position they expected it to be, and therefore they simply didn’t see it.
One of the great benefits of usability testing is that it helps us to understand what users were actually looking for and what they expected – where was their selective attention focused? This in turn can help us design more effectively.
What do you think? Are there other implications for selective attention?
Many of our clients are Learning & Development departments in multinational companies. Anecdotally, it seems that most recognize the need to adapt learning and development methods to audiences in different cultures, yet lack the capacity, and sometimes the strategy, to make such adaptations. A good place to start for learning leaders, instructional designers, and trainers might be self-education. We're compiling a reading list of books and articles about global training and cultural awareness; we'll share it on our website once we've read enough of the items to make a decent list. I'd like to recommend two books. I've read one, and I've read the table of contents of the other (the 7-page TOC was enough to ascertain that the book was relevant!).
Cultural Intelligence: Living and Working Globally, 2nd Ed., by David C. Thomas and Kerr Inkson. This book neatly presents a research-based framework and provides loads of examples of workplace situations where people operating from their cultural contexts misunderstood each other and missed or misinterpreted cues. The book is organized in a way that makes it useful as a field guide--there are chapters on cross-cultural decision-making, leading, negotiating, and teamwork. Even if you just page through to read the real-life situations, you will be enlightened.
Cultures and Organizations, 3rd Ed. (just released!), by Geert Hofstede, Gert Jan Hofstede, and Michael Minkov. Geert Hofstede is one of the most well-regarded researchers and authors on the subject of society-based culture. As with Cultural Intelligence, there are numerous ways to use this book besides reading it cover-to-cover. For example, there are tables that any tech writer would admire that contrast key differences in behavior between societies that are weak or strong in a particular cultural dimension, such as individualism (e.g., the US) versus collectivism (e.g., Japan). Here are just a few ways that I think these books are useful to the L&D profession:
I'd love to add your recommendations to our list. You can find me on LinkedIn.
I wrote in the latest Fredrickson eZine about a radio interview featuring Geek Squad founder Robert Stephens. The interview itself is interesting, but it started me thinking about Stephens’ observations about smartphones in terms of how this technology will eventually change workplace learning and development.
Here’s the interview, courtesy of Minnesota Public Radio:
If you can’t see the embedded audio player on this page, here’s the interview on MPR’s website.
I’m interested in your comments. What do you think of the arrival of the smartphone age? When and how will this technology change learning in the workplace?
Earlier this week, I attended a presentation by Robert Stephens, founder of the Geek Squad. He was speaking to the Minnesota Chapter of the Entrepreneurs Organization. I was excited to hear him because I’d also heard him on Minnesota Public Radio a couple of weeks ago, and one of his comments really stuck with me--essentially that curiosity is increasingly important, more so than expertise, in many jobs.
Over the 20 years that I’ve been interviewing people for possible employment at Fredrickson, I’ve always listened for evidence of curiosity. That tells me two things: that the person is likely a life-long learner and that he or she is resourceful and will readily look up information at the time of need.
Stephens’ point about curiosity is timely as we watch many corporate learning organizations shift from a focus on “push” methods such as courses to “pull” methods such as wikis. Being curious and resourceful will help people use these self-service tools effectively. And having curious and resourceful people in a company will be essential to that company’s ability to build and benefit from an enterprise social network.
See www.robertstephens.com for more on what he presented to EO Minnesota and on MPR.
Measuring the effectiveness of training is a continual challenge. Many questions about measurement strategies exist, especially around how to accurately measure the business impact of training. Measuring the business impact usually involves measuring the changes that occur at what Kirkpatrick’s evaluation model classifies as levels three (behavior change) and four (business results).
Here’s an example of a level three measurement strategy and how I believe it contributed to a successful business initiative:
I was the manager in charge of training for a major SAP implementation. My team developed level three evaluation checklists for many processes and tasks that were, in turn, aligned to specific business goals.
For example, one of our business goals was decreasing the time it took to complete the month-end close process. We documented the steps of the process, along with who was responsible and accountable for each step, and aligned each step to that business goal.
By providing clear ownership and the tools to measure the individual steps of the process, we were able to confirm that each step was being completed accurately after implementation. In other words, we had achieved a behavior change, which is what Kirkpatrick classifies as a level three measurement of the effectiveness of training.
The result was that the month-end close process was reduced from 21 days to 5 days. Obviously, the learning solution did not cause this reduction by itself, but nobody questioned the value of the learning component’s role in the initiative. I believe that by providing the tools for the measurement and creating the measurement framework for level three, we actually helped drive level four results—a direct impact on the business.
I’ve also learned that level three measurements provide an excellent avenue for encouraging on-the-job follow-up by supervisors and others accountable for business processes, skills, and tasks.
I look forward to hearing your comments and thoughts, both here on the blog and at my ASTD presentation on February 19.
In the world of web application and site design, there’s been a trend over the last several years toward more multi-faceted “user experience designer” roles. According to the job descriptions, these people are expected to do it all: user research and task analysis, information architecture, interface design, graphic design, programming, usability testing and evaluation, project management, business strategy, presentations, and so on.
Though there’s reason to be skeptical that many of us can be truly outstanding in all of these skills and practice areas, let’s just say that Acme Design Agency does indeed have such people. Even in that case, is it really a good idea to have the same person, or team, involved in creating a design and then evaluating the usability (or user experience) of that design?
Teams can have the best intentions, but it’s tough to look in a hard, neutral, objective way at your own work. In the same way that good writers need good editors, user interface designers need an objective, unbiased evaluation of their products. That’s why it’s important to go outside the design team – and in some cases outside the company – and have a usability analyst conduct usability testing with representative end users, or at least do a heuristic evaluation.
Otherwise, there’s a temptation to be defensive, to look for validation and reasons to keep the work that’s been done, instead of trying to uncover flaws or weaknesses in presentation, navigation, interaction, or content. Design teams are a bit like parents – they aren’t likely to call their baby ugly.* They’re just not objective. (And in the case of parents and children, that’s usually a good thing!)
I’ve had to call a few UI design babies ugly over the years – well, in so many words – and though this has sometimes stung the designers or developers involved, it’s always helped them create better interfaces. And that’s the goal we’re all aiming for.
*Thanks to our client Kathy Bohlke, UI/UX Manager at 3M, for making this analogy.
At the last meeting of the Fredrickson Roundtable for Learning Leaders, our discussion topic was social learning. In the course of the discussion, several books were recommended. I’m sorry, I don’t know who recommended each book, but here they are:
Enterprise 2.0: New Collaborative Tools for Your Organization’s Toughest Challenges
by Andrew McAfee
Harvard Business School Press, 2009
Leaders Make the Future: Ten New Leadership Skills for an Uncertain World
by Bob Johansen
Berrett-Koehler Publishers, 2009
There may have been more, so if you have additional recommendations, please mention them in a comment.
Dan Pink’s new book, Drive: The Surprising Truth About What Motivates Us (2009), in many ways picks up where he left off in his 2005 bestseller, A Whole New Mind: Why Right-Brainers Will Rule the Future.
In his earlier work, he uses the left and right hemispheres of the brain as a metaphor for describing the rising importance of a new type of work that is more creative, non-routine, and empathic. While work that depends on logical, linear, left-brain thinking can be easily automated and outsourced, work that depends on creative, non-linear, right-brain thinking will be more valued and less easily commoditized. Think MFA, not MBA.
In Drive, Pink again looks at the way the economy and the nature of work have changed and makes the case that business is out of step with what really motivates us. This time he uses the metaphor of the operating system: the extrinsic carrot-and-stick approach he calls Motivation 2.0 is due for an upgrade to Motivation 3.0, which runs on autonomy, mastery, and purpose.
In the context of managing people, autonomy is allowing employees the independence to determine the best way to meet established goals and targets. For example, as long as employees meet their goals ethically and responsibly, then it doesn’t matter if they show up at 9:00am or noon, or if they work from home. It means not standing over them with a carrot or stick. (And let’s face it, many managers find sticks easier to use than carrots.) Pink cites the positive example of the Results-Only Work Environment (ROWE), first used at Best Buy. (Hennepin County recently adopted the ROWE model.)
Complete mastery is a goal we will never reach, but when we work on something we really enjoy, the work of achieving mastery is not so tedious. Ideally, the boundary between work and play dissolves and we enter what Mihaly Csikszentmihalyi described as “flow.” All of us should aim to find work that allows us to enter this flow state, and employers should create environments where this flow state is encouraged, where constant learning, constant opportunities to improve, are readily available.
Purpose is all about doing work that matters to us. As Pink says, Motivation 2.0 “doesn’t recognize purpose as a motivator.” Instead, it’s “relegated to the status of ornament – a nice accessory if you want it, so long as it doesn’t get in the way of the important stuff.” But the desire for a meaningful purpose is central to what makes us human. Sure, extrinsic rewards are enough for some people, but for many of us, contributing to a larger purpose is crucial.
Pink cites numerous examples in Drive to show that allowing these values into the workplace does not mean sacrificing performance. In fact, it’s just the opposite. Embracing these values enables great performance.
Beyond the issue of fair compensation, the absence of these values from the workplace is the central reason why so many people are desperate to leave their cubicles and find something else. At the risk of sounding like I am sucking up to my employers, the presence of these values at Fredrickson is a key reason why I’ve been here for almost ten years.
But a big reason why I am interested in Pink’s argument is the way it connects so well with the case for social learning and enterprise 2.0, because it seems to me these related technologies and practices are very much in harmony with the idea of Motivation 3.0. For example, making social learning possible, allowing employees to learn from and contribute to enterprise blogs, wikis, forums, and more, putting tools and practices in place that improve knowledge flow – these are clearly related to the values of autonomy and mastery. (It’s no surprise that Best Buy – where ROWE began – is also a leader in enterprise 2.0 and social learning.)
So I’ll leave you with some questions:
What do you think?
Watch Dan Pink’s TED presentation on Drive from July 2009: http://bit.ly/DV7xg.
Read Andrew McAfee on Enterprise 2.0: http://andrewmcafee.org/
Read Harold Jarche on Social Learning: http://www.jarche.com/
Almost twenty years ago, I wrote an article for STC about estimating work effort for creating user guides and online help systems. The article, “Stop Guesstimating, Start Estimating,” provided metrics for various types of content and tasks. I still get occasional notes from technical writers who say they’ve used the metrics successfully for many years.
I’m also asked these questions:
The answer to both questions is “yes—as a starting point.”
The metrics represent the hours of information gathering, writing, and revising that it takes to finish a countable unit, such as a page, a help topic, or a glossary definition. But the process used to create the finished units can significantly affect the work effort. If you have a consistent process, then your metrics are likely to be very reliable.
However, if every project you do involves a different set of people and a different process, using historical metrics alone may lead to under-estimating the project.
As we’ve gathered metrics from projects in the past year, we have seen the work effort to create a page of content vary by a factor of two. It wasn’t that one course had more interactivity, or that there was more content on an average page in one course versus the other. It all came down to the process for creating the content. Two recent projects illustrate this variability.
For one project, about 80 percent of the content was known and agreed on at the beginning of the project. The writer could use traditional methods of gathering information from subject matter experts and existing materials. She was then able to create a draft of the course content with the typical number of open issues that could be resolved during the review process.
For the second project, only about 10 percent of the content was known and agreed on when the writer began the project. As a result, the process she used to create the content was vastly different from that of the first project. She facilitated sessions in which the subject matter experts discussed what the policies should be. She then wrote the policies and identified areas that the group hadn’t yet addressed. When the subject matter experts saw the results of their work in writing, they re-thought some of the decisions they’d made. The writer revised the draft accordingly. This continued for about three review cycles.
The difference in work effort? A finished page in the first course took our typical metric of 2.5 hours a page. A finished page in the second course took 5 hours to create.
A collaborative and iterative process, as followed in the second project, is becoming more prevalent as change and work speed up in corporate life. The trick is to recognize before the project begins that such a process will--or could--occur so that you can estimate accordingly.
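The estimating logic described above can be sketched in a few lines of Python. The 2.5-hours-per-page baseline and the doubling for a collaborative, iterative process come from the projects described; the function name and the multiplier for partially settled content are illustrative assumptions, not a formula from the original article:

```python
# Illustrative work-effort estimator based on per-unit metrics.
# Baseline of 2.5 h/page is the "typical metric" cited in the post;
# the mid-range multiplier is an assumption for illustration.

BASELINE_HOURS_PER_PAGE = 2.5  # typical metric with a traditional process

def estimate_hours(pages, content_known_fraction):
    """Scale the baseline metric by how settled the content is up front.

    When only ~10% of content was agreed on, iterative review cycles
    roughly doubled the per-page effort (5 h/page in the second project).
    """
    if content_known_fraction >= 0.8:
        multiplier = 1.0   # traditional gather-write-review process
    elif content_known_fraction >= 0.5:
        multiplier = 1.5   # assumed midpoint, not from the article
    else:
        multiplier = 2.0   # collaborative, iterative process
    return pages * BASELINE_HOURS_PER_PAGE * multiplier

print(estimate_hours(40, 0.8))  # content largely known at kickoff
print(estimate_hours(40, 0.1))  # content mostly undefined at kickoff
```

The point of the sketch is simply that historical metrics are a starting point: the same page count can produce very different estimates once you account for how the content will be created.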
The presenters in our last Intersect meeting on 11.17.09 had some valuable stories to tell about their respective intranet design and development projects. We thought it would be useful to share tips from the presentations, supplemented by some of our own thoughts.
Garrett, Jesse James. The Elements of User Experience. 2002.
Krug, Steve. Don’t Make Me Think: A Common Sense Approach to Web Usability. 2000.
http://www.nngroup.com/reports/intranet/design/ This report reviews “The Year’s 10 Best Intranets.” Note that the examples are all from large companies in the private sector. The cost is $224 for a single copy.
Redish, Janice. Letting Go of the Words: Writing Web Content that Works. 2007.
See more of our recommended resources.
First of all, thanks to everyone who attended our seminar yesterday at the ASTD-TCC Regional Conference. It was great to meet all of you and J. Hruby and I both enjoyed the presentation and discussion. One point from yesterday really stuck in my mind and I thought it was worth exploring further here in the blog:
When you include users or learners in your review process for online learning (and most in the seminar agreed that you should!), how should they be selected?
A couple of thoughts from me and then I’d love to hear your comments:
* Beware of reviewers who claim they can “represent” the actual learners! I’m just reiterating this because it was one of the best points that emerged from yesterday’s seminar. Thank you to the participant who shared a story that illustrated the problems that can occur when anyone other than an actual learner tries to speak for the learner.
Managers, supervisors, and highly-experienced employees may be eager to volunteer to be reviewers, but only real learners should represent the learner’s point-of-view.
* Select reviewers who represent the range of skill and experience levels within the learner base, not just the area supervisors and highly-experienced employees who may be most eager to volunteer.
* Think about what kind of feedback you want and communicate this clearly to the learner-reviewers. Be aware that when more experienced employees are involved in reviews, they’ll often want to influence the content and how it’s conveyed so that it reflects their experiences, views about how things should be done, etc.
That’s fine if this is the type of feedback you’re looking for and you’re in a phase where the content is still under development. Often, however, we involve learners because we want opinions about the effectiveness of content that has already been decided on. Make sure you understand what you want from your reviewers and then communicate that to them clearly.
Again, we really enjoyed yesterday’s seminar. Please share your thoughts and comments.
I believe strongly that corporate cultural fit matters to a person’s success and satisfaction in a job. I screen prospective employees with that in mind, and I’m pretty effective at it; I dare say I’m proud of the skill. Having shared beliefs about work and how to treat colleagues and customers helps a business run smoothly.
However, some recent reading has reminded me not to be so sure that I’m an authority on corporate cultural fit. Of course it’s important. But at what point can a quest for shared beliefs turn into a quest for people who think and behave just like oneself? And to what degree is corporate culture a U.S. concept that may unwittingly exclude or alienate people from other countries? Here is what prompted these questions:
I’m looking forward to conversations with colleagues in other businesses and other countries about their definitions and practices regarding corporate culture.
For those interested in the connection between the tips listed below and a study of five state Department of Revenue websites, read on past the list.
As part of a recent project, we met with users of a state Department of Revenue (DOR) website and asked them to rate the home pages of five peer sites, focusing especially on navigation:
The users rated Louisiana the highest and Minnesota the lowest. So what did Louisiana get right and Minnesota get wrong? Here are a few summary points.
I spend a lot of time speaking with people who would like to work for Fredrickson. I most enjoy the conversations with those who are curious and always learning. They expand their professional skills in spite of limited opportunities to do so in their current job. Many are artists, musicians, actors, athletes, or mentors outside of work. Or they are otherwise active in their communities or professional associations. Finally, they are interested in learning about other cultures and perspectives.
People with these traits, I’ve found, are often the most adaptable to change and the most productive amid change.
Yet I wonder whether there is still reluctance among job-seekers and employers to acknowledge and discuss how experiences outside of work contribute to what a person can bring to a particular job. Just today, I interviewed a person in the learning and development field. Her past career as a winter sports coach came up in conversation, and I expressed my surprise that she hadn’t included this experience on her resume. She had chosen to omit it out of concern that an employer would form a negative impression of her character and wonder whether she’d be asking for extra time off to pursue the coaching. I hope I convinced her that 1-on-1 sports coaching was directly relevant to 1-on-1 corporate leadership coaching!
I’d like to see “demonstrated life-long learner” become a standard requirement on all job descriptions.
Seeing articles, webinars, and presentations with this title makes me weary.
Yes, budget is important, and yes, we should be fiscally responsible and good stewards of money. However, the question of tight budgets for training just doesn’t make much sense to me, because I’ve always been a firm believer in answering this question first: “Why are we providing training?”
If the answer to that question is that we are providing training to improve performance or rectify some type of business problem, shouldn’t the budget question really be, “How do I show the benefit of this training in terms of the business problem?” If you answer that question, the answer to the money question should follow.
At Fredrickson we’ve long prided ourselves on sharing both what we know and what we think about topics through our seminars, articles, and the Fred Comm eZine.
The addition of the Fredcomm Blog gives us another way to continue to share our thinking about things that matter to us and we hope they matter to you as well.
The Fredcomm Blog won’t be written by one person, and it won’t be for just one audience. Instead, we’ll feature entries from anyone at Fredrickson who has something to say about any of our practice areas. From learning, to usability, to communications, we’ll get around to discussing it all in one blog.
I hope you’ll bookmark this blog and return often.