SES Our Coverage
Year:
2021
Duration:
1 month
My Role:
UX & UI Design
The Team:
Project Manager, UX Researcher, Front-End, Back-End and QA developers
👇  Have a play with it live. Please note that not all of the features and revisions described below have been fully pushed live yet.

See the live tool

Context
SES is one of the largest global satellite operators, connecting broadcast, telecom, corporate and government customers worldwide. It operates over 70 satellites across two different orbits: GEO (geostationary orbit) and MEO (medium Earth orbit).

SES partnered with Catch Digital in 2016 for a complete redesign of their digital platform, and partnered again in 2018 to introduce Our Coverage, a primarily B2B tool for exploring SES's satellite fleet, coverage and ground infrastructure. In August 2021, I was tasked with taking over the project's second phase and looking at usability improvements after almost two years of the tool being live.

Process/Approach

🎯
Understand the problem
What is the challenge? What are the friction points?
💡
Ideate
Create task flows, letting ideas out
🖍
Design
Wireframing, UI Delivery
🕹
Test & Review
A/B and guerrilla testing, reviews with team and client
🔑
Handover
Prototyping, Communication with Front-End and Back-End engineers
The challenge
The goal of this project was to increase the product's usability, with the key performance indicator being the average time spent on the tool.
We started by running user-testing sessions with SES customers, in order to validate or invalidate our assumptions about the tool's friction points and usability issues.
📌
Our user-testing sessions lasted approximately 30 minutes each and were run using Lookback. We spoke with 10 SES customers with experience working in the telecommunications industry. They were a mix of men and women, aged between 26 and 50.

This allowed us to center our efforts around four main pain points:
📒
A complex tool with very specific information and language that can feel overwhelming on first use
🌐
Results organized by orbital position make the tool confusing for most users
🛰
A lack of satellite comparison makes it hard for the user to choose the right one
📱
Some real usability issues when using the tool on mobile

📒 Better user guidance/onboarding with a new virtual tour

Our Coverage is a very rich tool, displaying complex information with very specific language (which isn't universal across all countries). The first pain point we observed during our initial user-testing sessions was that users could feel overwhelmed on their first interaction with the tool.
💡
When asked about their ability to use the tool straight away to find information, about 50% of users felt lost and unguided, although they were eventually able to find the information they were looking for.

Users who didn't have a complete understanding of satellites and orbits didn't feel guided at all, and had trouble understanding most of the terms displayed. This was also true of some people familiar with satellites but used to different terms in other languages.

Based on those inputs, I started thinking about ways to better guide users through the tool's different functionalities, and decided to introduce two new features: a virtual tour and a help tooltip.

I was able to A/B test these different mockups with some users who knew very little about satellites, as well as some previous interviewees who were already used to the tool, and came away with valuable input.

This was highly valuable, as it cleared up my concerns about the potential invasiveness of a virtual tour launched directly on load. Feedback around the tooltip was hugely positive and reinforced my confidence in that approach.

🌐 Results organized by orbital position tend to be confusing

Since its launch, Our Coverage has organized its results by orbital position, with satellites listed within each position.

This was an approach I suspected to be a major pain point, as the final results people were looking for with this tool were satellites.
💡
In our user-testing sessions, although all users were able to find a satellite option related to their search, more than 70% of them found the cards confusing, as they expected results to be shown as a list of satellites — which confirmed my initial assumption. Navigating results by orbital position felt unnatural and frustrating to users.

With those new inputs and the validation of the back-end developers on this project, I decided to redesign the results cards, highlighting satellites instead of hiding them under orbital positions. I also introduced some new icons after further user testing showed that names alone (which can be quite abstract for most people) weren't enough on their own to differentiate a satellite from other entities.
Old card design (orbit first):
5°E
Sub-Saharan Africa, Europe, Atlantic Ocean Region, North America, Latin America & the Caribbean
ASTRA 4A
Select a footprint to view on the 3D globe or learn more about the individual satellite.
Details and 2D Footprint

New card design (satellite first):
ASTRA 4A
ASTRA 4A is a Ku-band satellite, capable of serving a range of applications, including VSAT networks, rural telecommunications, pay-TV, mobile broadband, GSM backhaul, maritime and e-learning.
Details and 2D Footprint

🛰 Ability to compare Satellites

As a natural follow-up to organizing results by satellites, something that often came up when talking with some of SES's clients using the tool was the need to easily compare different satellites and their features, especially their different footprints. One requirement that came out of these talks, as well as discussions with the client, was the ability to share this comparison "sheet" within a company without necessarily having to go through the main tool and globe exploration.

To solve this, we decided to include the comparison tool inside the newly designed satellite cards, by adding an "Add to Comparison List" CTA on each card. This lets users navigate through different locations, select up to 4 satellites and open the comparison sheet in a new tab, without closing or obstructing the main globe tool. The feature would then be added to the virtual tour to inform users of its functionality.
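To make that behaviour concrete, here is a minimal front-end sketch of the comparison list described above, written in TypeScript. It is purely illustrative: the `ComparisonList` class, the `/our-coverage/compare` URL and every name in it are hypothetical, not the actual SES implementation.

```typescript
// Hypothetical sketch of the "Add to Comparison List" behaviour:
// up to 4 satellites can be selected across locations, and the
// comparison sheet opens in a new tab so the globe view stays intact.
const MAX_COMPARISON = 4;

interface Satellite {
  id: string;   // e.g. "astra-4a" (illustrative identifier)
  name: string; // e.g. "ASTRA 4A"
}

class ComparisonList {
  private items = new Map<string, Satellite>();

  // Returns false when the list is full or the satellite is already
  // listed, so the card's CTA can be disabled accordingly.
  add(sat: Satellite): boolean {
    if (this.items.size >= MAX_COMPARISON || this.items.has(sat.id)) {
      return false;
    }
    this.items.set(sat.id, sat);
    return true;
  }

  remove(id: string): void {
    this.items.delete(id);
  }

  // Opens the comparison sheet in a new tab, encoding the selection
  // in the URL so the "sheet" can be shared within a company without
  // going through the globe exploration first.
  openSheet(): void {
    const ids = [...this.items.keys()].join(",");
    window.open(`/our-coverage/compare?satellites=${encodeURIComponent(ids)}`, "_blank");
  }
}
```

In this sketch, encoding the selection in the URL is what would make the sheet shareable on its own, independent of any globe session.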

I decided to test this feature with the newly mocked-up designs to validate a few points.
The overall response was very positive. All users were able to easily access the comparison tool and complete the tasks they were given with no friction points. Understanding of the feature was very good at every step, and the different CTAs made no real impact on the user's journey, so I decided to stay with the first iteration for better consistency with the rest of the website, while still having the tool open in a new page.

Two users expressed a desire to re-order cards within the comparison tool, which is something I would have liked to implement, but unfortunately it was not possible due to budget considerations.

📱 Improving usability on Mobile

The last major pain point I wanted to address in this second phase was the mobile experience. The tool was originally designed primarily for tablet and desktop, and navigating it on mobile felt very unsettling and frustrating.
💡
Looking at analytics, it turned out that almost half of all users accessed the tool on mobile, with an average session duration much lower than on desktop.

I started by looking at user interactions with the mobile tool: while completing tasks was mostly as successful as on desktop, the overall experience was rated much lower, with a very poor interface. From there, I listed the different friction points we identified.

To solve those points, I decided to completely redesign the mobile view, with filters displayed to match the desktop view, and two tabs at the bottom allowing users to switch easily between the 3D globe and the results.
There are a couple of features I wasn't able to introduce on mobile due to external factors, such as the virtual tour and satellite comparison, so for now we decided to specify that the full experience is available on desktop, under the help icon where the virtual tour would normally sit.

Results

Overall, the testing and analytics we were able to run were massively positive, with a great improvement in how users rated the experience, as well as an increase in average session duration of more than 81% on desktop and 72% on mobile since implementation.

Takeaways & what I would do differently

I'm very happy to have been part of this project and to have led the improvement of the tool despite a reduced timeline. The tool's experience has been greatly improved since its first release, and some of these implementations are on their way live.

One thing I would try to improve, if I had the opportunity, would be to make more room for design reviews of the live tool once designs have been implemented on the front end: some features don't match the designs perfectly, which makes the tool inconsistent in places in terms of fonts, padding, or hover states inherited from the older design system.

Have a look at other projects 👇