Friday, 3 November 2017

Implementing PDF Exports

At some point nearly every SaaS company receives a feature request to export user data. The request usually comes in the form of “I want to export my data to a <Microsoft Office Product> file.” Here at Sprout Social we’ve gotten these requests before and today we support CSV and PDF exports for nearly all of our reports. Implementing CSV exports was relatively straightforward. Implementing PDF exports, on the other hand, was a more complicated beast. In this article I want to share with you the history of Sprout Social’s PDF exports and some of the issues we ran into in the hopes that it may help you should you choose to go down a similar path.

Beginnings

Early in Sprout Social's life, sometime in 2010, we received one of those data export requests. Our users wanted PDF copies of our reports so they could easily share data with teammates who didn't use Sprout Social. Unfortunately for us, the options for generating PDFs at the time were fairly limited. There were a few command-line tools, but they weren't very flexible and didn't offer great CSS support. Many browsers at the time could print a web page directly to a PDF file, but that was cumbersome for users, and it was very difficult to add print styles to a page that wasn't designed for them from the start. So instead of adopting an existing solution we decided to build our own, and shortly thereafter, Papyrus was born.

Papyrus

Papyrus is the internal name for our first PDF report generation service. It’s a Java service that accepts a JSON payload as input and uses a library called iText to generate a PDF. Although some of the details are a bit complicated, using Papyrus to generate a PDF is relatively simple.

iText uses an XML-based markup language and a subset of CSS to create and style PDF documents. We know the layout of our PDFs beforehand; we just don't know the content. Using Mustache, we can create templates of our reports that are filled in with user data at generation time. Once we combine a user payload with the template to produce the full markup, iText can generate a PDF document to return to the user. We employ a few tricks to generate the PDFs—such as using Rhino and Highcharts to generate graphs—but the majority of the heavy lifting is done by iText. Most of our work lies in creating the templates for each of the reports.
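
To make the template-plus-payload idea concrete, here is a minimal sketch of merging user data into a Mustache template with the mustache.js library. This is illustrative only: the XML-ish tags and field names are placeholders standing in for iText's markup, not Papyrus's actual templates.

    // sketch.js -- illustrative only; the tags below stand in for iText's markup language
    var Mustache = require('mustache');

    var template =
      '<report>' +
      '  <title>{{accountName}} Engagement Report</title>' +
      '  {{#metrics}}<metric name="{{name}}">{{value}}</metric>{{/metrics}}' +
      '</report>';

    var payload = {
      accountName: 'Acme Co',
      metrics: [
        { name: 'Messages Sent', value: 1280 },
        { name: 'New Followers', value: 342 }
      ]
    };

    // The merged markup is what a library like iText would then turn into a PDF.
    console.log(Mustache.render(template, payload));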

While Papyrus has the benefit of simplicity, it also has a few drawbacks. Most notably, the templates are onerous to create and difficult to match to designs. We’re also forced to duplicate display logic in the markup and on the front-end, meaning that both back-end and front-end developers have to be involved in creating and modifying the reports. Because of these drawbacks, we started searching for alternatives in early 2014.

PhantomJS

By 2014, PhantomJS was becoming increasingly popular in the web development world. Most usage focused on browser automation and testing, but one of its lesser-known features is its ability to perform screen captures. Relevant to our use case, it can capture the contents of any web page as a PDF file. Using this feature, we set out to build a service that would generate PDF reports based on the contents of the report's page in our app.
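
For reference, a PhantomJS capture script can be as small as the sketch below. The URL, page size, and delay are illustrative, not the values we actually use.

    // capture.js -- run with: phantomjs capture.js
    var page = require('webpage').create();

    // Letter-sized output keeps the layout close to a printed report.
    page.paperSize = { format: 'Letter', orientation: 'portrait', margin: '1cm' };
    page.viewportSize = { width: 1280, height: 1024 };

    page.open('http://localhost:8080/reports/engagement?pdf=1', function (status) {
      if (status !== 'success') {
        console.error('Failed to load the report page');
        phantom.exit(1);
        return;
      }
      // Give asynchronous charts a moment to finish before capturing.
      window.setTimeout(function () {
        page.render('/tmp/engagement-report.pdf');
        phantom.exit(0);
      }, 2000);
    });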

We soon had a prototype for a new PDF generation service that could take screenshots of our existing reports. It wasn’t an out-of-the-box solution, however. We had to modify several parts of our application to make the reporting pages compatible with the way we were using PhantomJS. Some of those changes included:

  • CSS workarounds. PhantomJS 1 is based on older versions of WebKit, which led to a lot of our CSS not working in PDF mode. In most cases, we had to fall back to using IE9 workarounds for PhantomJS.
  • A Function.prototype.bind polyfill. PhantomJS 1 notoriously doesn't support Function.prototype.bind even though it implements most of the rest of the ES5 standard. (A simplified polyfill sketch appears after this list.)
  • Fonts. If you search for “PhantomJS fonts” you're likely to come across an article showing how to get PhantomJS to recognize local fonts: put the fonts in /usr/share/fonts/truetype and then run fc-cache -fv. That works great until you also run into the issue where PhantomJS doesn't implement the CSS font-family declaration correctly. We didn't discover that issue until we were in production and our Typekit fonts failed to load.
  • A custom version of the reporting page. If PhantomJS took a screenshot of the report as-is, the PDF would include a lot of unnecessary content such as navigation bars, headers, and footers. The page also wouldn't look very good because the contents weren't optimized to fit on a standard PDF page. To work around this we created another web page that would only render the content necessary for the PDF, and in a layout that made sense for a PDF. This meant we had to duplicate some layout code, but the majority of the components (graphs, charts, media objects, etc.) could still be reused.
  • Authentication. Because PhantomJS didn’t have the user’s cookies we had to choose between side-loading data on the page or finding a way to authenticate PhantomJS to make API requests on the user’s behalf. Because of security concerns at the time we opted to side-load the data onto the page. That meant the front-end would have to gather all of the necessary data and ship it to PhantomJS when exporting a report.
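
The bind polyfill mentioned above can be as small as the following sketch. The MDN polyfill handles more edge cases (such as use with new); this covers the common path.

    // Simplified Function.prototype.bind polyfill for PhantomJS 1.
    if (!Function.prototype.bind) {
      Function.prototype.bind = function (thisArg) {
        var fn = this;
        var boundArgs = Array.prototype.slice.call(arguments, 1);
        return function () {
          var callArgs = boundArgs.concat(Array.prototype.slice.call(arguments));
          return fn.apply(thisArg, callArgs);
        };
      };
    }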

The workflow turned out to be rather complicated, but it worked.

  1. The user initiates a PDF export and the front-end gathers the required data in a JSON payload.
  2. A request is sent to the PDF service, which starts an instance of PhantomJS and points the browser to the reporting page.
  3. The user payload is injected onto the reporting page and the page uses the data to render the report.
  4. PhantomJS captures the page in a PDF that is uploaded to S3.
  5. The S3 URL is returned to the client and the PDF download is initiated.

It lacked the simplicity of Papyrus, but it alleviated some of the frustrations we'd had with it. Not only were the reports as vibrant as the web versions, but now all of the logic for PDFs lived in the web code. An entire report could be designed and implemented by the front-end team, making them easier to develop and easier to ship. Seeing the potential in the new method, we sought to improve the service.

The New PDF Generator Service

After working with our PhantomJS-based service for a while, we started to identify some areas where we could improve the workflow. Most notably:

  • Testing PDFs was difficult. Because the way the service generated report URLs wasn’t configurable, developers had to set up their own instance of the service in order to test reports outside of production.
  • We weren't utilizing PhantomJS to its full potential. Our prototype worked, but we soon realized that PhantomJS had features that could simplify our workflow. For instance, the onInitialized hook would allow us to inject data directly into the page instead of uploading it to a server only to have the page re-download it (a sketch appears after this list). We also never properly enabled the PhantomJS disk cache, which would cut down on page load times if we configured it correctly.
  • The service used a fixed version of PhantomJS. We sometimes upgraded the version, but we had to upgrade every report at the same time. Making the version configurable would allow each report to operate independently of the others.
  • Error handling was not a first-class concern. It was incredibly difficult to debug JavaScript errors that occurred on the PDF reporting pages.
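
As a sketch of that injection approach (the file path, URL, and global variable name are illustrative), the onInitialized hook runs before any of the page's own scripts, so the payload is already in place when the report renders; the disk cache is enabled with a command-line flag such as --disk-cache=true.

    // render.js -- illustrative; run with: phantomjs --disk-cache=true render.js
    var fs = require('fs');
    var page = require('webpage').create();
    var payload = JSON.parse(fs.read('/tmp/report-payload.json'));

    page.onInitialized = function () {
      // evaluate() serializes its extra arguments into the page context,
      // so the report's own JavaScript can read window.__REPORT_DATA__ on load.
      page.evaluate(function (data) {
        window.__REPORT_DATA__ = data;
      }, payload);
    };

    page.open('http://localhost:8080/reports/engagement?pdf=1', function (status) {
      if (status === 'success') {
        page.render('/tmp/engagement-report.pdf');
      }
      phantom.exit(status === 'success' ? 0 : 1);
    });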

Using what we had learned from the first version we began to implement version 2 of the PhantomJS PDF generation service. We took a deep dive into PhantomJS’s documentation and source code and utilized more of its features. We were able to inject data directly into the page and enable the disk cache which resulted in our generation times dropping by as much as 40%. We made nearly every aspect of the service configurable, from the version of PhantomJS used to the URL of the report to the generation timeout.

In version 2 we made large strides in our error handling, since this was our biggest pain point. We utilize every error hook available in PhantomJS to ensure that any and all errors are captured in the log files. Errors are categorized by where they happen and how serious they are. They’re also given error codes to return to the client to help debug customer issues in production. Any request that fails in production is logged along with the contents of the payload, allowing us to reproduce the request later if needed. We also have a test page that sends raw payloads directly to the PDF generation service, allowing us to bypass the UI and the API when reproducing customer errors and reducing the amount of time it takes to find the cause. Because of the increased error-handling surface area, we saw our service losses go from one or two a month to zero in the last 16 months.

As part of our refactor we also modified our front-end code to create smaller payloads. Instead of sending raw request data to the service—most of which wasn't used—we began to send processed, aggregated data. In some cases we cut down the payload size by a factor of 10. These changes, combined with the efficiencies mentioned above, mean that reports now take 5 to 6 seconds to generate instead of the previous 20 to 25 seconds. And that time continues to decrease as we make further optimizations and switch more of our rendering logic to React.

When we finished the wave of improvements, the workflow was nearly identical. But by tackling some low-hanging fruit we were able to lower request times, lower error rates, improve the debugging experience, and expand the feature set. And the new ability to specify a PhantomJS version let us update our reports to PhantomJS 2, which is not only faster but also requires fewer CSS and JavaScript workarounds to generate our PDFs.

Since we launched the new PDF service 16 months ago updates have been few and far between. Its flexibility has allowed us to add new reports without any changes to the service. And the reliability of both the service and PhantomJS 2 has allowed us to start designing larger features around PDFs without worrying about scalability. This isn’t the final chapter in the book of PDFs at Sprout Social, but we are in a good place and we’re excited to see what the future holds for us and our customers.

This post Implementing PDF Exports originally appeared on Sprout Social.



from Sprout Social http://ift.tt/2yrvB82
via IFTTT

#SproutChat Recap: Planning Social Content for the Holidays

With the start of November, the holiday season is on everyone's minds. For some, this means increased engagement and a rise in customer service inquiries. In this week's #SproutChat we talked about planning social content for the holidays (plan early, plan often), UGC and global recognition of holidays.

Start Planning Early

Try to get ahead of the hectic season and avoid planning holiday content as it comes. Even if the holiday season is not your brand's busy time, it's likely clients and team members will be out of the office, so give yourself more freedom later by tackling things early.

Engagement Will Vary

Depending on your brand and your organization's overall objectives, engagement may fluctuate. If you have a commerce component, you'll likely see an uptick in customer service inquiries, so make sure your customer care plan is buttoned up.

Recognize Your Audience

If you handle a global brand's social presence, be cognizant of how your holiday messages may come across. Avoid alienating portions of your audience by focusing too narrowly on the sales content that'll drive those end-of-year numbers.

Show off Company Culture

For B2B companies, the holidays are a great time to showcase your company culture. In a season where your overall engagement may be going down, take the opportunity to tap internal employee advocates to share visuals of office outings and happenings.

Join #SproutChat next Wednesday to chat with Sprout All Star Elite Kellen McGugan of BigWing about overcoming challenges in agencies. Until then, be sure to join our Facebook community to connect with other brilliant folks in the industry.

This post #SproutChat Recap: Planning Social Content for the Holidays originally appeared on Sprout Social.



from Sprout Social http://ift.tt/2lRb6f9
via IFTTT

How to Find Which Sites are Driving Retention

I’ve previously written about how to use Kissmetrics to find which backlinks drive signups. I wrote that article because we all know backlinks are great for SEO, which is great for traffic, but what really matters is the quality of traffic you’re getting. So, what that post explained was how you can use a Funnel Report to see who came to your site, and how many of them signed up. We then segmented that traffic by the first ever link that sent them to our site.

It's a nice, handy way to use Kissmetrics to provide some insights and potentially inform future campaigns.

But what about the step after the first visit or a signup? What about retention? How do you find which sites are sending you the visitors that keep coming back?

The idea for this post came from my own experience. I've been using DuckDuckGo (DDG) lately, and one day I simply entered weather just to see what would return. I saw that DDG uses a site called DarkSky, one I'd never heard of, even though they have the #1 paid weather app in the App Store.

I liked the site's layout and its ad-free content, and the forecasts have been pretty accurate. Now I use it as my primary weather site.

So this had me wondering – if I was working at DarkSky, how would I know where people are coming from? And of all the traffic channels (direct, organic search, DDG, etc.) that are sending us traffic, how could I track that to see which sources brought the highest retention? In this case, we’ll refer to retention as simply coming back to the site after their first visit.

So, here’s how to find that out using Kissmetrics.

The Cohort Report

Kissmetrics is full of reports, each of which serves a different purpose. Some can be used for analyzing customer acquisition campaigns; others can be used for retention analysis. And some can be used for both.

The Cohort Report is primarily used to track retention (some even use it for conversion rates). It groups people together based on similar attributes and tracks their behavior over time. In our case, we'll be grouping the people that have visited our site, and we'll group them by the domains they were first referred by.

 

The setup is pretty easy. We'll set our conditions to Visited Site and Visited Site (the same event twice: a first visit followed by a return visit). We'll then segment by the first referrer:

KM Referrer is simply the referring URL that brought traffic to your site. If a visitor came to your site via a Google search, the KM Referrer would be www.google.com. If they came from the Kissmetrics Blog homepage, the KM Referrer would record as blog.kissmetrics.com.

It’s also important to note that we’re tracking people on a week-to-week basis. This means that each week is a “bucket”. All visitors that came from Google in the last 6 months are put in the www.google.com bucket, then tracked each week. If they visit in the second week after their first week, they’ll be placed in that bucket. If they don’t return in the third week but do in the fourth week, they’ll appear in the bucket for the fourth week as well.
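
If it helps to see the bucketing logic spelled out, here is a toy sketch of how returning visits fall into weekly buckets keyed by the first referrer. The visit data is made up for illustration; this is not Kissmetrics code.

    // Toy example only: group returning visits into weekly buckets by first referrer.
    var WEEK_MS = 7 * 24 * 60 * 60 * 1000;

    var visits = [
      { person: 'a@example.com', referrer: 'www.google.com', at: Date.parse('2017-09-04') },
      { person: 'a@example.com', referrer: 'www.google.com', at: Date.parse('2017-09-13') },
      { person: 'b@example.com', referrer: 'nytimes.com',    at: Date.parse('2017-09-05') },
      { person: 'b@example.com', referrer: 'nytimes.com',    at: Date.parse('2017-09-26') }
    ];

    var firstSeen = {}; // person -> { at, referrer } of their first visit
    var buckets = {};   // buckets[firstReferrer][weekIndex] = count of returning visits

    visits.forEach(function (v) {
      var first = firstSeen[v.person];
      if (!first) {
        firstSeen[v.person] = { at: v.at, referrer: v.referrer };
        return; // the first visit defines the cohort; it isn't a return
      }
      var week = Math.floor((v.at - first.at) / WEEK_MS);
      buckets[first.referrer] = buckets[first.referrer] || {};
      buckets[first.referrer][week] = (buckets[first.referrer][week] || 0) + 1;
    });

    console.log(buckets); // { 'www.google.com': { '1': 1 }, 'nytimes.com': { '3': 1 } }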

Now that we've got that cleared up, let's run the report and get our data:

The key thing to look for when viewing our Cohort Report is that the darker the shade of blue, the greater the retention.

So it looks like organic search from Google is sending us the most traffic.

However, we see our highest retention is from the 52 people that came from nytimes.com. To me, this data signals that we should spend more time trying to get press coverage. SEO is always great, and it has good retention, but nothing beats the traffic coming from nytimes.com.

So What Does All This Mean?

Traffic is the first step. The second step is retaining that traffic by getting those people to come back. Find what percentage of new users come back (using the cohort report) and then see where you’re getting your above-average retention (with a significant amount of traffic). That’s where the Cohort Report shines – showing you where you’re underperforming and outperforming against your baseline retention.

Conclusion

Traffic is great. Signups are even better. But the most important part is retaining those new users. That’s the only way to build quality traffic and an audience.

So how do you measure your progress on retaining users?

This is where cohort reports come in. Specifically, the Kissmetrics Cohort Report (which was the example we used here). Its segmentation flexibility (you can group people by whatever you track), along with our person-tracking analytics, means that you get not just numbers, but also who you are retaining and where they came from.

This post really started out to answer a question – how would DarkSky (or any other site) know if the traffic they get from DuckDuckGo (or any referrer) is being retained? And, perhaps at a higher level, how would they know if they're even getting traffic from DuckDuckGo? I wrote this post to answer that question. To recap, in two steps:

  1. Run a Cohort Report, segmenting your group by their original referrer domain.
  2. See how many of the people that came from that original referrer kept returning by viewing each bucket across the row in the report's data.

Any questions? Let me know in the comments.

About the Author: Zach Bulygo (Twitter) is the Blog Manager for Kissmetrics.



from The Kissmetrics Marketing Blog http://ift.tt/2Airp7F
via IFTTT

Wednesday, 1 November 2017

10 Facebook Live Tips to Follow Before, During & After Your Broadcast

Michelle Obama shades Trump with advice on how to tweet


As a former first lady Michelle Obama knows how much weight words can carry. She gets it. But she wants to make sure everyone else does, too. Everyone.

Queen Michelle sat down with poet Elizabeth Alexander to speak at the second day of The Obama Foundation Summit in Chicago. She spoke on the importance of sharing your voice intelligently and hmmmmm we wonder if she was referring to anyone in particular...

"When you have a voice you can't just use it any kind of way ... You don't just say what's on your mind. You don't tweet every thought," Obama said. Judging from the immediate burst of laughter in the crowd, one would assume she was throwing some shade at the current Tweeter-In-Chief, Donald Trump. Read more...



from Social Media http://ift.tt/2z5orWd
via IFTTT

How a Structured E-Commerce Testing Plan Leads to Quick & Stable Wins

If you were Amazon.com CEO Jeff Bezos, how would you structure your testing and experimentation process to drive growth?

Let’s look at what Bezos says about experimenting (emphasis mine):

“One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there.

Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a 10% chance of a 100-times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of 10. We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments.”

As CEO of Amazon.com, which is, if not the world's first, then certainly the largest and most successful ecommerce business (and by now involved in industries far beyond retail), Bezos convincingly puts forward the case for adopting a test culture in any ecommerce environment.

In this post, we’ll look at how you can structure your in-house ecommerce CRO program and create a testing plan that grows with your organization.

You might not be Amazon… but why not swing for the fences?

Plan to Fail (and Learn From it)

The process of conversion rate optimization, or CRO, aims to make ecommerce companies more profitable by increasing the proportion of purchasers to total visitors.

A structured process — encompassing research and hypothesis creation, testing itself, and the prioritization and documentation of those tests — is crucial to creating a testing culture that produces sustainable long-term results.

In most of these steps, the need for a plan is obvious. But most people don’t plan for the testing phase. In fact, testing is frequently regarded as an end in itself.

However, testing is just the culmination of the entire process that stands behind it. Its real end goal is to increase revenue.

In the same way that it’s not possible to formulate and create tests without prior research, it’s also not possible to run tests without planning. And moving from conducting individual tests or a sequence of tests to full-scale, constantly active testing is what separates a one-off CRO sprint from a thought-out, deliberate CRO program.

Guess which approach is better for establishing a testing culture that enables companies to grow while absorbing their mistakes?

Making mistakes and failures an integral part of growth means embracing the main components of any learning process. Each experiment, no matter how successful or unsuccessful, is a learning opportunity for you and your organization. Implementing and integrating the knowledge that results from your tests is one of the primary tasks of a viable CRO testing program.

Just a few reasons you should structure and document your testing program…

  • Testing every aspect of your website also enables you to challenge your prior assumptions by grounding alternative assumptions in data — instead of opinions or wild guesses.
  • Experimentation allows you to estimate the results of all improvements in real time, without having to wait for the end of the quarter to see improvement (or lack thereof).
  • By applying deliberate structure to the testing process, you make it easier to follow, teach, and repeat.

All of this makes conversion optimization testing a pivotal consideration for any business with ambitions of growth. One of the most efficient ways to set yourself up for ecommerce CRO success is to establish an ongoing process within your organization, with a specific, dedicated team.

This requires you to consider CRO not as an a la carte service provided by an agency, but as an opportunity to institutionalize and embrace the CRO process. And it requires that you learn to conduct tests yourself.

Why is a Testing Program a Necessity?

Note: If you want to test one hypothesis at a time, you can go ahead and skip this section.

Why? If you’re running one test at a time, your testing plan and program will be the same as the hypothesis prioritization list (which we’ll talk about below). There’s just one small issue that may bother you — the time required to put all your hypotheses to the test.

If you choose to go the one-test-at-a-time route, be prepared to spend some time on the journey. The best-case scenario, if you have 25 hypotheses to test, is that you’re looking at two years of testing. Why would it take two years? The recommended practice is to run each experiment for at least a month (or until the test reaches significance and/or covers a few buying cycles) to ensure valid test results.

“Significance” is a statistical concept that allows you to conclude that the result of an experiment was actually caused by the changes made to the variation, and not by a random influence. It’s key to ensuring that tests are actually valid and that their results are sustainable and repeatable.

Alex Birkett, Content Editor for Conversion XL, explains the concept of significance a bit more in-depth:

“What we’re worried about is the representativeness of our sample. How can we do that in basic terms? Your test should run for two business cycles, so it includes everything external that’s going on:

– Every day of the week (and tested one week at a time as your daily traffic can vary a lot)

– Various different traffic sources (unless you want to personalize the experience for a dedicated source)

– Your blog post and newsletter publishing schedule

– People who visited your site, thought about it, and then came back 10 days later to buy [your product]

– Any external event that might affect purchasing (e.g. payday)”

The 1-month rule above holds true for most websites. Those with exceptionally high traffic (ranging into millions of unique visits) will undoubtedly be able to achieve significant results within shorter periods. Still, to eliminate every outside influence, it is best to let tests run for at least a full week or two.

Say you have 37 different hypotheses to test. Your ideal aim is probably to create all 37 tests and conduct them all at once, as an alternative to going through the process of testing one by one.

Sadly, this isn’t possible either, for a different reason. Sometimes the experiments themselves will conflict with one another, limiting their usefulness or even invalidating each other’s results.

Since none of us want to be old men when our conversion optimization efforts reach fruition, we need an alternative. That's where the concept of testing velocity comes in. Testing velocity is an indicator of how many tests you conduct in a given time frame, such as a month. It is one of the key metrics of testing program efficiency, and the higher the velocity you achieve, the quicker your program will bring increased revenue. Provided, of course, you do everything right.

This is the simplified process of creating a testing program

The Building Blocks of Your Testing Program

The main elements that will determine the dynamics of your testing program are:

  1. Traffic volume
  2. Interdependency of tests
  3. The ability to support the design and implementation of multiple tests at once (operational constraint)

Let’s quickly go through what each of these elements means.

Traffic Volume

Traffic volume is an obvious obstacle, since your website traffic will influence not only what types of tests you can run, but also how many concurrent tests, and which pages will draw enough traffic to support tests.

Traffic volume is the reason to prioritize tests that have the greatest projected effect. Tests with higher expected lift have much lower requirements in terms of the sample size/traffic volume needed to reach statistical significance.

In practice, this means that if we expect a test to result in an increase in conversions of, for example, higher than 25%, we will need fewer observations to confirm this expectation than if we were expecting a 10% increase. This is the consequence of using a T-test as the statistical engine for running experiments: the smaller the effect of a change, the larger the sample needs to be in order to eliminate all outliers and reach statistical significance and confidence.
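
To put rough numbers on that relationship, here is a back-of-the-envelope sketch using the common two-proportion approximation at 95% confidence and 80% power. Your testing tool's exact calculation may differ, and the baseline rate and lifts below are invented for illustration.

    // Approximate visitors needed per variant to detect a given relative lift.
    function sampleSizePerVariant(baselineRate, relativeLift) {
      var zAlpha = 1.96; // two-sided 95% confidence
      var zBeta = 0.84;  // 80% power
      var p1 = baselineRate;
      var p2 = baselineRate * (1 + relativeLift);
      var variance = p1 * (1 - p1) + p2 * (1 - p2);
      return Math.ceil(Math.pow(zAlpha + zBeta, 2) * variance / Math.pow(p2 - p1, 2));
    }

    console.log(sampleSizePerVariant(0.03, 0.25)); // ~9,100 visitors per variant for a 25% lift
    console.log(sampleSizePerVariant(0.03, 0.10)); // ~53,000 for a 10% lift, roughly six times more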

Interdependency of Tests

The ability to run experiments concurrently is the function of each experiment’s dependency on the others. What does this mean?

The basic principle is that we want to test a new page treatment on the maximum available number of visitors. If you happen to set up an experiment that will filter people out of the next experiment, then you will not be abiding by this basic principle.

If your visitors are split 50% on an initial page, meaning that half do not get to see the next page that’s also being experimented on, you will not have a valid test result.

For example, you may want to improve your funnel. So you create experimental treatments (variations) that will run on two different steps of the funnel. This may mean that the visitors that are shown one page do not get to see the other — because the experiment’s outcome has influenced how many people get to see the other experiment you are running.

Your sample will automatically be 50% smaller, meaning the test will have to run twice as long as it otherwise would have needed to achieve significance.

Running concurrent experiments can cause interdependency issues

To prevent this issue, estimate the interdependency risk prior to creating an experiment, and run interdependent experiments separately. You can sometimes solve this issue by using multivariate tests (MVTs), but sometimes your traffic volume will preclude this. Additionally, too many variants in MVTs can invalidate the experiment results.
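
One simple way to keep interdependent experiments apart, sketched below, is to assign each visitor deterministically to at most one experiment in the exclusive group. The experiment names and the hash are illustrative, not any particular tool's API.

    // Deterministically route each visitor into one experiment of an exclusive group.
    var EXCLUSIVE_GROUP = ['checkout-step1-copy', 'checkout-step2-layout'];

    function hashToUnitInterval(str) {
      var h = 0;
      for (var i = 0; i < str.length; i++) {
        h = (h * 31 + str.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
      }
      return h / 4294967296; // map to [0, 1)
    }

    function assignedExperiment(visitorId) {
      var slot = Math.floor(hashToUnitInterval(visitorId) * EXCLUSIVE_GROUP.length);
      return EXCLUSIVE_GROUP[slot]; // this visitor only ever enters this one experiment
    }

    console.log(assignedExperiment('visitor-42')); // stable for a given visitor id

The trade-off is exactly the one described above: each experiment in the group only sees its share of the traffic, so each one takes longer to reach significance.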

Operational Ability — How Many Tests Can You Design and Actively Run?

In an ideal world, we would all be testing all the hypotheses we’ve created just as soon as the research is complete!

However, creating and running an experiment is hard work. It requires efforts from multiple people to create a viable and functional test. Once the research results are in and you have framed your hypothesis, the experiment won’t just spring into existence.

Making an experiment requires preparation. At minimum, you need to:

  1. Sketch out an updated visual design, which you’ll use to create a mockup or high-fidelity wireframe
  2. Create an actual design based on the mockup
  3. Code the design/copy changes
  4. Perform a quality assurance check and do a dry run before the test is live

All this requires time and effort by a team of people, and some of the steps cannot even begin before the previous ones are complete. This is your operational limitation.

You can overcome operational limitations by either hiring more people or limiting the number of tests you run.

Adjust Testing for Outside Influences

While it would be great if every experiment happened in a vacuum, this just isn't the case. Website experiments performed for the purposes of conversion optimization will never enjoy the controlled environment of scientific experiments — where the experimenter can maintain control over every influence other than the one being intentionally changed.

However, we can at least account for obvious or expected test influences, such as holidays that affect the shopping habits of our customers or other predictable events that may change buyer behavior. By taking these factors into account when framing your plan, you can adjust for this and run the experiments at a time when the risk of outside influence is smaller.

Even More Benefits of Creating a Testing Plan

Having a testing plan not only makes your CRO process faster and more effective — it has a number of important additional benefits.

Let’s start with the benefit that’s most important in the long run. A test plan structures and standardizes your approach, making it repeatable and predictable.

An active, structured testing process with no expiry date essentially creates a positive feedback loop, so that even when your testing plan reaches its conclusion, you’ll feel encouraged to seek new challenges and run more tests.

In the long run, this leads to the establishment of a bona fide testing culture within your organization.

A structured process also allows for better feedback on the results. At each phase’s conclusion, you can review the results, update your expectations for the next phase, or adjust experiments that failed in the previous phase. In effect, you’re “learning as you go”.

Finally, a testing plan just plain-and-simple allows for better reporting and makes a more persuasive case for conversion optimization as an organizational must. If you are able to report progress in monthly increments, with results clearly attributed to experiments (which were built on hypotheses, which were derived from research), you’re much more likely to gain organizational support for your CRO program.

A testing plan creates clear milestones and enables the research team to accurately track progress, plan future activities, and remove potential bottlenecks in deploying and implementing experiments. That way, the chance that the testing process may spiral out of control is completely sidestepped, and each team member’s role is clear.

How to Structure Your Testing Plan

We’ve just explored why you need to make a testing plan prior to actual testing — let’s call that step zero, if you will. Now let’s talk about the nuts and bolts of creating that plan.

First, figure out what type of test(s) (A/B test, MVT, or bandit) you’ll run. Test type determines how much traffic you need, as well as the development effort necessary to deploy experiments.

Next, you need to carefully estimate the interdependency of your tests and make adjustments to your priority list if any tests clash with each other.

Finally, to determine the number of experiments you can run, estimate how many you can effectively support with available staff. Take into account that you need researchers to frame hypotheses, and designers and front-end developers to create variations and set up the experiment itself. Since each of these groups will have a number of tasks to attend to, you need to make sure you run only as many tests as your staff can support.

To ensure this, start by going through your list of hypotheses. If you prioritize tests accurately according to the effort necessary to deploy them, you’ll already have many of the inputs for your test plan.

Ultimately, your testing plan should take the form of Gantt charts, which are very helpful in indicating the time frame for each test phase.

A test program is usually presented in the form of a Gantt chart

A “test phase” contains all the tests that can be run simultaneously. For example, if you discover you can run four tests simultaneously and you have 22 tests to run based on your hypotheses, you'll have six test phases (the last one only partially filled).

Your test plan should also list every proposed test and provide the following concise information for each (a sketch of one such entry follows the list):

  • Related hypothesis (the “why” of the test)
  • Required sample size
  • Expected effect
  • Who will be the subject (target segment or audience)
  • Where it will run (URL of the page)
  • When (the time period in which it will run)
  • Rough description of changes (the “what” of the test)
  • How to measure success (what metrics the experiment should improve/affect to be considered a success)
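
Here is one way such an entry might be captured as data; all field names and values are invented for illustration.

    // One illustrative entry in a testing plan.
    var plannedTest = {
      name: 'Simplified checkout form',
      hypothesis: 'Removing optional fields will reduce checkout abandonment', // the "why"
      requiredSampleSize: 18000,                 // per variant, from the sample size estimate
      expectedEffect: '+15% checkout completion',
      audience: 'Returning visitors on desktop',
      url: 'https://example.com/checkout',
      schedule: { start: '2017-11-13', minDurationWeeks: 4 },
      changes: 'Collapse the address form to required fields only', // the "what"
      successMetric: 'Checkout conversion rate'
    };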

If you structure your testing plan this way, you will maximize your test velocity and allow for maximum efficiency of your optimization program.

How to Prioritize and Assign Testing Tasks

Once you create and structure a plan, the only remaining ingredient necessary for success is to actually run through the process.

Obviously, both to secure the greatest possible revenue and to create initial confidence, the first tests you run should be those you expect to have the greatest effect. Select the hypotheses that have high importance (for example, issues that affect your users’ movement through the funnel); that you are most confident will work; and that require the least effort to implement.

You can choose a prioritization model to apply to hypotheses during the research process. Apply the model properly and if your estimates are correct, you will almost certainly get the results you’re looking for.
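
As one example of such a model, ICE scoring (impact, confidence, ease) is a common choice; the scale, weights, and hypotheses below are purely illustrative.

    // Rank hypotheses by a simple ICE-style score: impact, confidence, ease (1-10 each).
    function iceScore(h) {
      return (h.impact + h.confidence + h.ease) / 3;
    }

    var backlog = [
      { name: 'Shorter signup form',    impact: 8, confidence: 7, ease: 6 },
      { name: 'New homepage hero copy', impact: 5, confidence: 4, ease: 9 }
    ];

    backlog
      .sort(function (a, b) { return iceScore(b) - iceScore(a); })
      .forEach(function (h) { console.log(h.name, iceScore(h).toFixed(1)); });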

For each experiment to succeed, you need to translate hypothetical solutions into practical web page designs as accurately as you can.

When you have a mental image of the variation you want to test, translate that into a visual image using a wireframe or mockup. Hand that off to your designers, who can turn it into an actual web page.

While the visual design is being prepared, your front-end developers need to check if any additional coding will be necessary to implement the variation.

The most important part of implementing an experiment is to ensure that it’s set up free of any technical issues. Do this by making quality-assurance protocols and checks part of your testing program.

Once a given step in the experiment development cycle is complete, staff involved with that step can immediately start working on the following experiment. Having a plan enables them to advance further without any delay, and adds to the efficiency of your conversion optimization effort.

Establishing a Culture of Experimentation

Building a testing culture is the main objective of a structured CRO process. A testing culture requires the company to make a switch from a risk-averse and slow-decision-making mindset to a faster, risk-taking approach. This is possible because testing enables you to make decisions based on measurable, known quantities — in effect reducing your risk.

Extensive research is a necessary prerequisite of successful A/B testing (something that, hopefully, most people involved in testing already understand)! Suffice it to say that the role of research is well publicized, and there are a number of articles about it.

We will also assume that by now, you know how to frame a hypothesis from this research. The hypothesis creation process is just as important to the ultimate success of your CRO effort as running the tests themselves. Only properly framed, strong hypotheses will result in conclusive A/B tests.

In a structured CRO effort, no element should be left to chance. Extend the same careful treatment to actual testing as you afford to research and hypothesis creation. Once you’ve properly prioritized your hypotheses by the effort each will take, their importance, and their expected effect, you need to prepare your tests with the same forethought.

How you approach setting up your testing program will greatly impact your end results. The aim of every good testing program is to attain the maximum test velocity and see meaningful test results in the shortest possible time.

About the Author: Edin Šabanović is a senior CRO consultant working for Objeqt. He helps e-commerce stores improve their conversion rates through analytics, scientific research, and A/B testing. Edin is passionate about analytics and conversion rate optimization, but for fun, he likes reading history books. He can help you grow your ecommerce business using Objeqt’s tailored, data-driven CRO methodology. Get in touch if you want someone to take care of your CRO efforts.



from The Kissmetrics Marketing Blog http://ift.tt/2yjRJkt
via IFTTT