AI Comes for Creatives

“The Celebrated No-Hit Inning,” a 1956 science-fiction short story by Frederik Pohl, contains, as a prelude establishing its premise, a vivid sketch of machines taking over a job once thought open only to humans. An inventor showed up at a major-league ballpark with a robot batter. As skeptical players guffawed, the robot whiffed on the first batting-practice pitch it got, but the inventor said it just needed a minor adjustment and applied his screwdriver. Sure enough, the robot drove the next pitch sharply, and every pitch thereafter. In short order robots replaced every major leaguer, and soon the idea of a human ballplayer competing against a robot became laughable.

Robot athletes are still science fiction, but robots performing repetitive tasks, such as assembly-line work, have been around for decades. Many repetitive white-collar tasks have also been automated by computer software. My particular interest is creative pursuits, once thought immune to replacement, where artificial intelligence (AI) is coming on strong and companies are racing to explore and monetize it. Indeed, the tsunami of developments in this field is too much for me to capture in this blog post. But I can offer my opinion.

How did we get here?

If you think about it, we’ve been following this path for thousands of years. The history of civilization consists of the replacement of human labor, first by animals and then by machines. Oxen plowed fields, which replaced foraging; horses carried travelers; looms wove cloth; engines replaced horses. The inevitable result was more output from fewer people, freeing us to pursue other endeavors.

Broadly speaking, these advances have been steady and successful. For example, at one time three quarters of the farmland in the United States was used for growing food for horses, and manure in city streets was an urgent public-health threat. We used horses as working animals right into World War II before we could replace their labor with engines. Since then the US horse population has declined by half, but I don’t think they would complain today about their lot as mostly recreational animals or pets, and the farmland is freed up for other crops. Similarly, mechanical devices have completely replaced many categories of manual labor. The overall impact has been great for consumers but not, at least in the short term, for the workers affected. The Industrial Revolution was recent enough that we have stories about the impact of job disruption on the displaced workers. But though we have many more people today, we mostly all can earn a living one way or another, because new jobs have been created that tend to be less dangerous and less repetitive.

Four teenage boys setting pins by hand in a New York City bowling alley, 1910, under the supervision of a boss
Until the 1940s, pin setting was done manually, often by children (as in this 1910 photo) [Wikipedia]

AI is improving at an incredible rate across a huge range of job titles. Are we next?

Admittedly, on a small scale things are not so rosy. As Harry Truman said, a recession is when your neighbor loses his job; a depression is when you lose yours. Replacement hasn’t happened yet in baseball (though automating ball-and-strike calls can’t come soon enough). But AI is improving at an incredible rate across a huge range of job titles. Hay farmers, pin setters, telephone operators, copyeditors, technical typists, technical illustrators—are we next? I want to focus on creative endeavors in general and writing—OK, technical writing—in particular.

A brief history of artificial intelligence is in order. The first generation of AI programs explored the limited world of game theory. It was trivially easy to code an unbeatable tic-tac-toe program, but more complex games proved challenging. A well-known example is chess. Because the set of all possible positions in a chess game is beyond storage even today, chess programs have to figure things out move by move. The first generation of programs were given the rules of the game, the relative value of the pieces, and a library of known solid openings. Without “knowing” what constituted a promising line, the programs relied on clever algorithms to evaluate a position and select the best move, and then on brute hardware power (CPU speed and memory) to evaluate positions as fast as possible. Still, it took nearly 50 years of that approach before Deep Blue, an IBM program running on a purpose-built supercomputer, defeated world champion Garry Kasparov in an over-the-board match at regular time controls, and even at that it needed two tries.

Eliza was an early rules-driven natural-language program developed in 1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Eliza was the first chatbot, designed to simulate conversation. Weizenbaum modeled its interaction style on a psychoanalyst, and it worked unexpectedly and disturbingly well: users engaged in personal conversations with Eliza, and Weizenbaum’s own secretary even asked to interact with it privately. In a foreshadowing of today’s events, users misconstrued what Eliza was doing.

I once worked for a company that provided building blocks for businesses to write rules-driven decision-making programs. If businesses could codify their decision-making processes, then they could automate those processes. Wouldn’t it be great if an insurance claims adjuster or a bank mortgage officer could have access to a program that always gave expert, by-the-book decisions, never skipping a step or indulging in favoritism? The issues that emerged with expert systems (not from my former employer but in general) were twofold. First, I say “access,” but of course businesses preferred to replace workers. Second, in practice expert systems coded with existing policies and trained on existing data were unwittingly given, and thereafter acted on, past errors and existing biases, either in their algorithms or in the input data. For example, one justice-system program (not based on our software!) designed to recommend pre-trial release decisions and evaluate the risk of recidivism was tougher on Blacks than whites given the same case facts. These kinds of problems left businesses exposed to lawsuits, the very thing they were trying to avoid.
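A rules-driven decision program is easy to sketch. In this toy mortgage-screening example (the rules, function names, and thresholds are all invented for illustration), you can see both the appeal and the danger: every applicant with the same facts gets the same auditable answer, but any bias baked into the codified policy is applied uniformly, case after case.

```python
# Toy rules-driven "expert system" for mortgage pre-screening.
# The rules and thresholds here are invented; in a real system they
# would be codified from business policy -- along with any historical
# bias hiding in that policy.

def screen_application(income, debt, credit_score):
    """Apply fixed rules in order; return (decision, reasons)."""
    reasons = []
    if credit_score < 620:
        reasons.append("credit score below 620")
    if income > 0 and debt / income > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    decision = "approve" if not reasons else "refer to underwriter"
    return decision, reasons

print(screen_application(income=80_000, debt=20_000, credit_score=700))
# -> ('approve', []) -- by the book, never skipping a step
```

The consistency is the selling point; the uniformly applied bias is the lawsuit.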

The next generation of AI programs used huge amounts of data to make their decisions. This technology was exemplified by IBM Watson, which was fed millions of documents and given the raw computational power to search through them in real time. In 2011 Watson impressively defeated “Jeopardy!” champions Ken Jennings and Brad Rutter in regular game play.

The next advance in AI was to dispense with programmed rules in favor of a “neural network” that weighed connections in data and produced a “best” outcome. In 2017 AlphaZero, a purpose-built game program designed by DeepMind, was given the rules of chess and set to play against itself to learn what patterns of play led to victory. Crucially, DeepMind engineers played no role in AlphaZero’s development; it developed superhuman chess skill without benefit of a single programmed algorithm or even an opening library. After 44 million(!) self-practice games, the AI crushed Stockfish, the best chess program available, in a 100-game test match without losing a game. (In a thousand-game rematch played in 2018 with conditions more favorable to its opponent, AlphaZero routed the then-newest version of Stockfish with 155 wins, 839 draws, and only 6 losses.) The game logs showed that AlphaZero repeatedly evaluated positions and lines as winning that its opponent evaluated as winning for itself. Not even human chess masters understood AlphaZero’s “thinking.”

The newest generation of AI, which has captured the imagination of computer experts and laypersons alike, combines neural networks with training on huge amounts of domain information. What can it do?

  • In a large study, an AI trained on mammograms more accurately predicted breast cancer risk than human radiologists.
  • College admissions officers already expect that AI-generated admissions essays will figure prominently this year, and are planning to use AI tools to detect AI essays.
  • Companies are already deploying AI to screen job applications.
  • A researcher prototyping AI drug discovery thinks he can reduce the cycle time from two years (and billions of dollars) to two weeks.

It seems no creative job is beyond AI’s reach. In art, photography, interior design, and other creative endeavors, AI’s advances have been stunningly fast, and in some cases have already reached the threshold of professional viability. AI services such as Midjourney, Stable Diffusion, and DALL-E can generate whole images in response to simple text queries. (Imagine using alt-text to generate an entire image.) One man used an AI to generate an image that won first prize in a digital fine-arts competition at the Colorado state fair, to the outrage of other contestants. In an unsettling recent interview, a game designer recounted balking at a graphic designer’s fee of $50 per image and turning to an AI to get what he wanted in a few minutes and for a few cents. How can anyone compete with that?

“Théâtre D’opéra Spatial” (Jason Allen via Midjourney) [https://tinyurl.com/8n64bd76]

In November 2022, the company OpenAI (founded in 2015 and the creator of DALL-E) released ChatGPT, a chatbot built on its Generative Pre-trained Transformer (GPT) large language model, trained on a vast corpus of text. ChatGPT generates responses by predicting, based on its input, the next likely word and building forward. Just six months later, ChatGPT is already responding to 100 million queries a week. Companies are using the latest version, GPT-4, to create AI-powered chatbots in a dizzying array of fields. Microsoft, Google, and others are racing to incorporate AI into their core products. (Imagine a Microsoft Word that can not just format your report but write it for you.)
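That “predict the next word and build forward” loop can be illustrated with a toy bigram model. Real LLMs use neural networks over vastly more data, but the generation loop is the same idea in miniature; the corpus here is obviously invented.

```python
# Minimal sketch of generate-by-prediction: count which word follows
# which in a (tiny) corpus, then repeatedly emit the most likely next
# word. Real LLMs learn these probabilities with a neural network.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking, but nothing is "looked up"
```

Notice that the output is locally plausible yet answers no question; it only echoes what tended to come next in the training text.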

Because of its broad training, ChatGPT displays expertise in a huge range of fields. This infographic summarizes an OpenAI report on GPT-4 and its impressive performance on selected professional examinations.

GPT-4 is impressively adept at qualifying exams in many (but not all) fields. (Image from https://digg.com/data-viz/link/how-smart-chat-gpt-actually-is-visualized-JFITjcxETG)

Copywriters, social-media content creators, and now editors are suffering the effects of AI competition. Scribendi AI is marketed as a productivity tool for overworked copyeditors. AuthorONE is marketed to publishers and offers manuscript assessment. Paperpal is aimed at academic writers, whose work can be formulaic. Sudowrite and Marlowe are aimed at fiction writers; the latter will optionally upload and anonymously store your manuscript in a “research database.” In these and other fields, AI offers—threatens?—to remove the middle folks entirely and generate output on its own, not in days but in minutes.

Issues in artificial intelligence

The capabilities of generative AI confound researchers, who can’t say where the responses are “coming from,” and who wonder if AIs have developed “theory of mind”—that is, a model of what other minds are thinking. If this sounds unsettling to you, you’re not alone. On May 29, 2023, the nonprofit Center for AI Safety issued a sensational one-sentence open letter: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”—Center for AI Safety 

It was signed by over 350 AI researchers and tech executives, including OpenAI CEO Sam Altman himself. Microsoft (of all companies!) has called for government regulation of the field. Alrighty then… What are the risks? Off the top of my head I can think of the Skynet scenario and the risk that jobs will be replaced or eliminated faster than workers, and the economy, can adapt to the change. Altman also cited the risk of spreading misinformation. Mark Twain said a lie can travel halfway around the world while the truth is putting on its shoes—and he never saw the Internet.

Like Johnny Five from “Short Circuit,” an AI in training demands a lot of input, but it can’t evaluate the quality of what it’s fed. Garbage in, garbage out. Back in 2016 Microsoft trained an adaptive chatbot on general Twitter input and some comedic riffs and set it loose on Twitter to interact with users. The hope was that the program would learn to sound like just another user—but had they ever used Twitter? The results were disastrous: within hours, an organized troll attack corrupted it into spewing racist and sexist chatter, and in less than a day Microsoft pulled the plug. In theory, a small-scale AI could go through your company’s internal information and then answer queries about it, solving the common problem of navigating poorly organized Confluence pages or SharePoint sites. But as a practical matter I know that regardless of how it’s stored, half the stuff in every company’s internal library is already obsolete. Quickly finding a wrong answer is not an improvement.

Predictive AI is non-deterministic: it produces output one plausible word at a time, sampling from likely candidates rather than always choosing the single most likely next word. Ask the same question twice in a row and you can get slightly different responses. That’s a deliberate feature, meant to keep the output from sounding robotic. But because of it, even OpenAI admits that ChatGPT, while it can create new content, can also “hallucinate” false statements of fact, because it isn’t actually checking anything. This is a fatal flaw in every field of endeavor where correct answers are required. Has Roger Federer won five Wimbledon singles titles or eight? Feed an AI a mix of current and outdated information and it might say both. For AI, hallucination is both a feature and a bug. So far, the quality triangle holds: AI produces content that’s fast and cheap, but too often bad.
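The mechanism is simple to demonstrate. In this toy sketch (the vocabulary and probabilities are invented, echoing the Federer example), the “answer” is drawn by weighted sampling, so repeated queries genuinely can disagree.

```python
# Why asking twice can give different answers: the next word is
# sampled from a probability distribution, not chosen outright.
# The distribution below is invented for illustration.
import random

next_word_probs = {"five": 0.55, "eight": 0.40, "several": 0.05}

def pick_next(rng):
    """Weighted sampling over candidate next words."""
    words, weights = zip(*next_word_probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded only so this demo is repeatable
answers = [pick_next(rng) for _ in range(10)]
print(answers)  # a mix: same question, different answers
```

Turn the sampling off (always take the argmax) and the output gets repetitive and robotic; leave it on and you accept occasional wrong turns. That trade-off is the hallucination problem in miniature.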

Even high-quality input, taken from websites instead of chats, or amassed from billions of images scraped from the web, can be problematic. Today AI can create an image simply from a prompt describing the desired result, but many current examples don’t pass careful inspection. The prizewinning image above looks fantastic, but the closer you look the weirder it gets. Today AI can produce photorealistic output but has a hard time with hands. The field is advancing so fast that it might just take a minor adjustment, but it’s not perfect yet.

Where did all that information come from? Is it curated, or just indiscriminately collected? Examples have surfaced of AI-generated art that contains stock-photo watermarks. Josh Marshall of Talking Points Memo points out that much of the visual information that’s been hoovered up is copyrighted, meaning that generative art is demonstrably built through intellectual theft. I stand in solidarity with my graphic-arts colleagues to denounce this dishonest practice!

Similarly, I am personally nervous about startups that offer “help” with manuscripts. This may sound like a valuable service for a great price. But are you submitting your great American novel for editing or for harvesting? Are you even writing it?

If you ask a generative AI for a sonnet in the style of Shakespeare, it can easily provide one because it has the data on what words the Bard used and how he strung them together. If you don’t specify a style subset, the results are still coherent but rely instead on the expressive skill of the crowd, which in practice tends toward the trite. Much worse, because statements of fact are predicted, not looked up, they ain’t necessarily so. Like Weizenbaum’s secretary, early adopters of AI are misinterpreting what they’re getting.

In a recent legal case, an attorney in a civil action used ChatGPT to prepare a brief, but the AI actually generated nonexistent case citations and the other side noticed. Another case arose when a journalist trying out ChatGPT asked for a summary of a gun-rights case, and the answer stated that a certain Georgia radio host was accused of embezzling from one of the parties. The man actually had no involvement and filed a libel suit against OpenAI. The first lawsuit from a patient whose cancer goes undetected by an AI is inevitable, even if the AI is objectively better than a human. Conversely, what happens if a human double-checks and mistakenly overrides a correct AI diagnosis?

In the software field we know about “vaporware.” Elon Musk introduced vaporware to the automotive industry by announcing that the Tesla Autopilot was “fully operational” and releasing Full Self-Driving as a beta, which many owners immediately began to use. Subsequent developments showed the technology was far from ready for public use. Autonomous control (definitely not my area of expertise) consists of the vehicle sensing its surroundings, interpreting the sensor data, and using those interpretations to make driving decisions. Cruise control, a simple application, has been available for decades; adaptive cruise control, where the vehicle slows or stops depending on what’s happening ahead, has been available for about ten years. But a fully self-driving car requires highly complex software with hundreds of subroutines making hundreds of driving decisions at unpredictable moments. As yet, it isn’t safe enough.

Colleges are already looking at setting AIs to catch AI-generated admissions essays. If existing data is tainted with human error, can’t we just create an AI to generate data for use by another AI? Researchers warn that such an AI feedback loop could lead to “model collapse.”

Getting past AI’s formative years won’t end the problems. The invasion of Ukraine is the first full-scale drone war, and just as with airplanes in WWI, both sides have quickly escalated their role from reconnaissance to weapons delivery. So far, the drones on both sides are remotely piloted. But it’s possible to add AI to create autonomous devices that can identify enemy combatants on the battlefield and make kill/no kill decisions. (God help the civilian who emerges from his basement during a skirmish.) Naturally, people are hesitant to delegate life-and-death decisions to computers. As of this writing, the Pentagon hasn’t actually deployed such devices and instead relies on remote operators. But killer robots are entirely possible and for now are restricted only by doctrine.

How can we differentiate our work?

We’ve fought the battle of quality versus quantity for years, so this is familiar ground. I would suggest that it’s foolish to replace one good worker with ten bad ones for the same price, but in the current environment companies increasingly disagree. What, then, makes our work worth paying for?

So far, at least, ChatGPT struggles with English language and literature. We can’t expect our natural advantage to last forever, but we have it for now. Looking more deeply, large language models draw from many examples of common, trite phrases, which is reflected in their output. (Strunk and White is not overweighted in the samples.) More deeply still, AI only simulates intelligence. Its output isn’t necessarily logical or even correct. AI output will need as much review as the work of a human writer, and probably more.

I worked at a startup whose CEO was reluctant to publish useful technical details because “our competitor will just steal it.” I had a problem with not helping our users, and I said so. The STC Ethical Principles specifically call on us to create “truthful and accurate communications” under the core principle of honesty. Until now, members have interpreted this to mean “write facts, not marketing hype.” But today it differentiates us from AIs, because no self-respecting tech writer makes up facts. The ethical standards also call on practitioners to work to help users, implicitly guiding us not to work to hurt them. Given the current state of AI, that’s also a differentiator, because we know the difference between helping and hurting.

There’s a lot of writing on the public web by both professional and amateur technical writers. I’ve already seen an attempt to use it to generate technical documentation. It’s easy to generate information in the style of Microsoft because the company has published a widely adopted style guide. But technical facts are company specific. If your company wants technical documentation in the style of your company and accurate for the next release of your product, it will first have to input its existing documentation, which may or may not be public, and then the specs for the next release, which are proprietary, copyrighted material. Otherwise, the new documentation will read like everything else on the web in both style and function. Is your company willing to feed proprietary material to a third party, or to develop its own dedicated small-language-model AI, just to avoid paying a technical writer? That doesn’t seem cost effective or wise. And given the risk of model collapse, it doesn’t seem as if humans can be removed from the content-creation equation any time soon. I like the idea that my work might become the gold standard somewhere!

Let’s assume AI technical output becomes as good as human-written output—cheaper, faster, and just as good. What then?

But let’s look past the period of minor adjustment. Let’s assume AI technical output becomes as good as human-written output—cheaper, faster, and just as good. What then?

As a technical communicator, there were topics I could write in my sleep, and possibly sometimes did. For example, when a management tool followed the CRUD model—every manageable object can be created, read, updated, and deleted—documenting a new object meant writing an overview of the object and then topics on how to create, display (or read), update (or edit), and delete it. If the architectural model was well designed and followed, every task proceeded in parallel for each object, so I could copy and paste the text from an existing topic… Admit it, you’re asleep already. Once the development team establishes a model (or pattern), a new error message, set of permissions, API method, or supported disk drive can be coded, tested, and documented by rote by junior workers. Work that follows a strong pattern and can be done by rote is very likely to be given to an AI. Hint: that includes the code. But as far as writing goes, these topics, which made up the bulk of my later assignments, were never the ones I looked forward to writing anyway. The complaint I’ve had, and heard, for years is that the rising volume and quickening pace of our work is overwhelming. If we don’t have to do the boring stuff, is that actually bad?
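The rote CRUD pattern is so mechanical that a few lines of Python can stamp out the topic set (the topic titles here are invented, not from any actual style guide), which is exactly why this kind of work is first in line for automation:

```python
# Sketch of the rote CRUD documentation pattern: given an object name,
# stamp out the standard topic set. Work this mechanical is precisely
# what an AI -- or a trivial script -- can take over.

CRUD_TOPICS = [
    "Overview of {obj}s",
    "Creating a {obj}",
    "Displaying a {obj}",
    "Updating a {obj}",
    "Deleting a {obj}",
]

def topics_for(obj):
    """Return the standard topic titles for one manageable object."""
    return [t.format(obj=obj) for t in CRUD_TOPICS]

for title in topics_for("snapshot policy"):
    print(title)
```

Of course, the script only produces the skeleton; filling it with correct, release-specific facts is the part that still needs a human (or a very well-fed AI).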

I question whether AI is a current threat to our profession, but we have to think about the future. Another approach to the challenge of AI is to level up, not surrender. As one marketing slogan put it: “AI won’t replace you. A person using AI will.” If you can consider AI as just the latest in a line of productivity tools going back to the typewriter, you can take advantage and get through the disruption. (I wonder what to say to new practitioners, as the skills required to enter the profession are increasing still further; but at least they don’t have to unlearn old skills.)

It’s going to be a bumpy transition. But I think we will get through to the other side. Perhaps there are higher pursuits to which we can turn our attention.

On to Retirement

Steve seated at his office desk wearing a Honeywell centennial sweatshirt
On my last day of work, 31 December 2022, wearing a sweatshirt from my first full-time job 44 years earlier

On New Year’s Eve 2022 I walked out of a deserted office and into another phase of my life. I don’t think of it as going “off” to retirement but rather going on to something new.

Some retirees say they got out “just in time.” My wife, who retired from a hospital lab in January 2020, absolutely did. Back then the world was weeks away from recognizing the coronavirus as a clear and present danger with no vaccines or even effective treatments available. Everyone was anxious, but front-line medical workers became rightfully frightened. Me? I was fortunate enough to continue working full-time at a white-collar job from a home office. Nobody in our house caught covid (knock wood) and my employer never pressured anyone to return to our office. I liked my work and I liked my boss in California, and I had an ergonomically sound setup and a ten-second commute. I could have continued indefinitely. In fact, I know people who have.

But while I was learning how to document microservice applications, single-sourcing from DITA to multiple outputs, in the back of my mind I still remembered how to paste up camera-ready copy. New knowledge kept piling up on top of the old. (This cultural reference is actually before my time, but I didn’t want my brain turning into Fibber McGee’s closet.) I wanted to stop while I could still point to my work with pride. And the economic calculus of retirement is complex: the longer you work, the greater your Social Security payments and the bigger your nest egg, but the less time you leave yourself to enjoy it. I got my first tech-writing job right out of college and I was still working past my full retirement age. I wanted to reclaim my time.

Looking back at my career, I wrote thousands of documents for dozens of products for a succession of employers. True, the audience was never large, and all but my most recent works for hire are already superseded. Also, I admit that technical writing is to writing as military music is to music. Deathless prose it wasn’t. But it was clear, concise, and correct.

STC has a strong educational mission, and over the years I’ve never understood those English departments that spurned our professional outreach offers to work with their students, as if applied writing was somehow ignoble. But as I told more than a few college classes, my career demonstrated that you could earn a good living as a writer. In the final accounting, it took both our incomes and decades of work to accomplish, but my wife and I bought and paid for two houses, put our three children through college, and drove (mostly) new cars. We ended with no debt and enough of a nest egg that we could afford to stop working altogether. Flex? Facts.

I don’t think I got out just in time. I don’t think the profession is in danger, just in its typical state of flux. In that regard, I can attest that technologically, everything has changed in the last forty-plus years, and yet nothing about what we do has changed. Despite the burgeoning capabilities of AI (about which I’ll write more soon), the need to explain technical products and services to users at a human level remains; if anything, it’s greater than ever. Digital technology in particular has passed Arthur C. Clarke’s point of magic; consumers don’t know how any of it works, they just expect it to work. We pay $1000 for a smartphone but only know how to use ten percent of its features (and I may be overly generous here). To explain these complex products clearly; to help people make full use of what they’ve paid for; to describe how they can get things done; all that remains the job of the technical communicator. I leave it to others to carry on that long tradition.

What will I do now? I have a lot of books on my shelves I bought but haven’t read; a lot of movies I added to my wishlist but haven’t watched; and a lot of adjectives and adverbs I have in mind but haven’t written. I’d like to trade cold facts for warm fictions and personal opinions—ooh, and maybe sometimes in complex sentences!

I’ll let you know how I’m doing.

A Fresh Start

Disposable mask

2020 has been an awfully eventful year. I’ve been fortunate enough to work from home since March, and we’ve essentially been hiding in our home since then. I bought a new car in September and haven’t yet put any gas in the tank. (It’s a Prius Prime, but come on—it still shows the tank as full!) I’m politically active, but in the last few weeks I’ve spent more time sleeping and less doom-scrolling.

Over the last two years I served back-to-back terms as president of STC New England, and my blogging energy went into announcements on the chapter website. Now I’ve handed over the reins and ascended to the post of immediate past president. At the same time I’ve moved this blog to WordPress. So it seems like a good time to start blogging for myself again.

At this point I’m not running for Society offices and I’m not looking for a new job. I don’t know if my new circumstances will allow me to blog more, but if I do, at least my posts won’t be quite so focused on STC, or so self-promotional.

STC Election 2018: Who I voted for

The annual STC election, for members of the Board of Directors and the Nominating Committee, is underway. If you’re a member, you should have received a link to the election website (send email to stc@stc.org if you have not). We have until 9 March to vote, and it’s important to do so. The Society needs our active involvement. You can find out more about the election slate here.

For what it’s worth, here’s who I voted for this year.


For Vice President: Ben Woelk

Headshot of Ben Woelk
Ben Woelk

Ben is a President’s Award winner, an Associate Fellow, and has been very active in his chapter, the Spectrum regional conference, the Society (particularly the scholarship committee), and the Community Affairs Committee on which we both serve. He has generously shared his time and knowledge with many of us. I agree with him that the primary challenge the Society faces is demographic, not just technological. If he approaches the role with the same vigor with which he has approached the campaign, he will accomplish a lot!


For Secretary: Kirsty Taylor (incumbent)

Headshot of Kirsty Taylor (from STC Board photo)
Kirsty Taylor

Kirsty is running unopposed for re-election, but she is a long-time active member who has been an effective Secretary. To use a cliché, consider that she has put her money where her mouth is by traveling to Board events from Australia. Now that’s commitment. She has earned my continued support.


For Director: Ramesh Aiyyangar and Alisa Bonsignore (incumbent)

Headshot of Ramesh Aiyyangar
Ramesh Aiyyangar

Ramesh is a past president and fifteen-year active member of the India Chapter, an Associate Fellow, a Distinguished Chapter Service Award winner, and, as I can attest, an energetic and effective member of the CAC. He recognizes the importance of moving past retention to a growth strategy for STC by expanding both our reach and our breadth. He would bring a welcome and important perspective to the Board. His work ethic, dedication, and perseverance continue to earn my vote.

Headshot of Alisa Bonsignore from official Board photo
Alisa Bonsignore

Alisa, who is running for re-election, recognizes the importance of the CPTC program, which of course appeals to me. But she also recognizes that membership and revenue growth, not budget cutting, are the keys to our future.



For Nominating Committee: Jackie Damrau and MaryKay Grueneberg

Headshot of Jackie Damrau
Jackie Damrau

Everybody in STC knows Jackie, who has been active at the Society level forever, it seems. This is an ideal attribute for a member of the Nominating Committee, and indeed she has served on the committee before. Jackie has also been a chapter president, regional conference manager, SIG leader, and member of multiple Society-Level committees. Unsurprisingly in light of her contributions to the Society, she is a Fellow. I admire her ongoing willingness to serve at the chapter and Society levels.

Headshot of MaryKay Grueneberg
MaryKay Grueneberg

Like me, MaryKay has been a technical writer since she graduated from college. She has been a chapter president and has served on several Society-level committees. She is an Associate Fellow. I think she has the appropriate breadth of experience and contacts to do a good job on the committee.

STC Election 2017: Who I voted for

The annual STC election for members of the Board of Directors is underway. If you’re a member, you should have received a link to the election website (send email to stc@stc.org if you have not). We have until 10 March to vote, and it’s important! The Society needs our active involvement. You can find out more about the election slate here.

For what it’s worth, here’s who I voted for this year.


For Vice President: Craig Baehr

Formal photo of Craig Baehr
Craig Baehr

Craig, a 25-year practitioner, an academic, a current Director, and an Associate Fellow, has made important contributions to the certification program and the Body of Knowledge, two of the Society’s most important initiatives. He has been published in both Technical Communication and Intercom. His recognition of the importance of volunteers and mentors resonates with me.


For Treasurer: Tim Esposito

Formal photo of Tim Esposito
Tim Esposito

Tim, an Associate Fellow, serves on the Society’s budget review committee. He has extensive experience as president and past treasurer of the Philadelphia Metro Chapter (PMC), and he has helped organize regional conferences. I’ve worked with him for the last year as part of the Community Affairs Committee (CAC) and found him energetic, responsive, and committed.


For Director: Ramesh Aiyyangar and Jessie Mallory

Headshot of Ramesh Aiyyangar
Ramesh Aiyyangar

If a board is made up of people with similar backgrounds and experiences, they can find it difficult to consider other viewpoints. Fortunately, this year three excellent and varied candidates can help avoid this problem. Ramesh is a long-time technical communicator, a past president and active member of the India Chapter, an Associate Fellow, and a member of the CAC. He recognizes the importance of a growth strategy for STC and expanding both our reach and our perspective. Ramesh ran a gallant petition campaign last year, and his dedication and perseverance earned my vote this year.

Formal headshot of Jessie Mallory
Jessie Mallory

Jessie has been very active in a relatively short time. She’s already served as president of PMC, and now coordinates social media for the BOK committee. She recognizes the importance of student members and young practitioners, and I agree with her approach of directly asking us how to make the Society better. I think she will bring invigorating youth and energy.


For Nominating Committee: Larry Kunz and Grant Hogarth

Candid photo of Larry Kunz
Larry Kunz

I’ve known Larry for many years. More to the point, a lot of people know Larry, and he in turn knows a lot of people. This is an ideal attribute for a member of the Nominating Committee. Larry is an active and influential blogger with an excellent grasp of the state of the profession and the Society. He is a Fellow and a President’s Award winner. I admire his energy and dedication at the chapter and Society level.

Candid headshot of Grant Hogarth
Grant Hogarth

Choosing between two other worthy candidates, I made my second choice on the basis of geographic diversity. Grant has over 25 years of professional experience, has served the Society as a chapter president and ITPC judge, and is active in other nonprofit organizations.

Unboxing Day

The unboxing phenomenon lets us vicariously enjoy the process of receiving and opening a new product by watching videos posted by other people. Unboxing videos are very popular: Unbox Therapy has over two million YouTube subscribers, and this video garnered over two million views in less than two weeks.

There’s a sensuous feel to unboxing videos, because some products are elaborately packaged. We may never even get our hands on some of them. For example, “Weird Al” Yankovic posted a video of himself unboxing his 2015 Grammy award for “Mandatory Fun.” (Vicarious and hilarious!)

Another class of video involves instruction on or demonstration of product installation and setup. Just as we once watched Julia Child or Bob Ross show us how to do things we didn’t know how to do, we can watch these videos to learn how to install or configure complex products. As someone who makes a living in part describing how to install and configure products, I’m interested in unboxing videos, and more so in installation videos. They give us a direct view of how consumers open, install, and set up products. It’s particularly relevant to consumer hardware, but software videos are increasingly available, and we can learn from them as well.

This Unbox Therapy video shows the unboxing and setup process for an Apple Watch. The effort Apple puts into their packaging is appreciated in at least some quarters (as of this writing the video has been viewed nearly two million times on YouTube).

Screen capture from installation video on YouTube: a hand holding up a Nest thermostat, still in its box, near the thermostat to be replaced
Unboxing and setup video for a Nest Learning Thermostat

This video shows the installation of a Nest thermostat. If Nest is smart—and I’m sure they are!—they’ve carefully analyzed this and other third-party videos involving their products. Why? First, although the “official” Nest installation video is also on YouTube and more popular (viewed over 420,000 times as of this writing), the unofficial one has still garnered over 46,000 views, and if it’s inaccurate, it could cause problems for the company. But also, even if it’s accurate, seeing how the product is installed from scratch in the real world by a real customer provides invaluable information. Many of us have had the experience of opening and assembling a laptop computer with both hardware and software components, developed separately and perhaps tossed into the same box. (There’s a story from DEC about a system that was shipped in one crate, but with three separate documents labeled “Read Me First.”) It’s a good idea to audit a first-time user’s initial experience, and an unboxing video affords us that opportunity. Installation procedures are painstaking, and usually we only have the energy to document the mainline, everything-works procedure. How much better the instructions would be if we knew of, say, the ten most common user errors and could head them off!

Chosen at random, here are two third-party videos of software installations. In one, covering Microsoft Dynamics CRM 2016, an experienced installer encounters and calmly works through multiple issues in a complex installation—issues that might otherwise halt the process and trigger a support call. In the other, covering Windows Server 2012, the installer walks through a maze of decision points that would make my head hurt to describe (though in this case the video might benefit from the time-compression techniques employed on “The French Chef”).

As technical communicators, then, what can we learn from unboxing videos?

  • That they may exist for our products right now, and that our customers may be using them
  • How our product is actually packaged and shipped, and how our customers deal with unboxing
  • How customers actually install and set up our products
  • How long steps take
  • Where points of confusion or error arise in the field

I hope you’ve received some nice products this holiday season and are enjoying unboxing them!

Steve Jong for STC Director

Photo of Steve Jong at the podium of the 2011 STC Summit

6 December 2016

If you are a current STC member, I have a personal favor to ask. I ask you to sign my nomination petition to appear on the ballot as a candidate for Director at Large of the Society in the upcoming Board election. As specified in Article VIII, Section 2, Part D of the STC Bylaws, I must collect some 600 member signatures in the next month to get on the ballot.

Why do I need to take this route? Well, I was vetted by the STC Nominating Committee, but not selected for the preliminary slate. You know my qualifications: I’ve served as an STC Director at Large and chairman of the Society’s first Certification Commission. I’m a 40-year practitioner, a 30-year member, an Associate Fellow, a past chapter president, and a President’s Award winner for my dedication and leadership. I have managed doc groups and led multiple non-profits. I have experience, and also a unique perspective as someone who understands STC both from top to bottom and from inside and out, and who can help effect the changes we need to survive and thrive.

Signing the petition does not commit you to voting for me in the election, but it does support my opportunity to serve you by letting me appear on the ballot. If I am so honored, I will campaign as a regular candidate. But I pledge to you that I’ll work as hard for STC this time as I have in my past roles—and as hard as I’m working right now to get that chance.

If you’re a current member, please sign the petition. Go here for more information on my platform.

Finally, whether you’re a current member or not, you can help me reach my signature goal by forwarding this message and the petition URL to your own network of contacts: http://www.ipetitions.com/petition/steve-jong-nomination-by-petition-for-stc

Thank you so much for your consideration and your help!