Scaling zulily’s Infrastructure in a Pinch, with Salt

At zulily, we strive to delight our customers with the best possible experience, every day. Our daily customer experience involves offering thousands of new products each morning, all of which comes together thanks to our technology and impeccable coordination across the organization. Because our product offerings change dramatically every day, quickly scaling our infrastructure to meet variable demand is of critical importance. In this article, we will provide an overview of zulily’s SaltStack implementation and its role in our infrastructure management, exploring patterns and practices that enhance our automation capabilities.

 

Let’s start with a bit of context, and not the jinja kind

Our technology team embraces a DevOps approach to solving technical challenges, and many of our engineers are “full stack”. We have several product teams developing and supporting both external and internal services, with a variety of application stacks.  All product teams have developers of course, and a few have dedicated DevOps engineers.  We also have a small, dedicated infrastructure team.

 

zulily has seen phenomenal growth since its inception: what was initially a tech team of one quickly became a tech team of a few, and then rapidly evolved into the tech team of many product teams and engineers we have today. With this growth, it became apparent that our infrastructure team was perhaps not the ideal team for managing all components and configurations across the entire Technology organization.

 

To elaborate further on this point, our product teams have overlapping stacks, but with variations, and many teams have vastly different components comprising their stacks. Product teams know their application stacks best, so instead of having a small team of infrastructure engineers manage all configs and components, we needed to empower product teams to take ownership by providing them with self-service options.

 

Enter SaltStack, which we adopted to address this organizational growth. We have found it very approachable, with its simple-to-grasp state and pillar tree layouts, use of YAML, and customization possibilities with Python. Salt is a key component in our technology stack, enabling our product teams to take control of their system configurations and keeping us moving forward quickly toward our goals.

 

saltenv == tenant (mostly), and baseless?

Like many initiatives and projects at zulily, we’ve taken a unique approach to our use of salt environments. It has worked out exceptionally well for our tech organization and we are excited to share our approach to multi-tenancy with salt.

 

Each product team has its own salt and pillar trees; salt environments essentially map to tenants. For example, we have environments with names such as “site”; we do not use salt environment names such as “dev” and “prod”.

 

But what about “real” environments? We are able to manage those too, thanks to our strict and metadata-rich host-naming convention, paired with salt’s state and pillar tree layouts and top.sls targeting capabilities. Our hostnames have the following format:

 

<product_team>-<function>-<node_number>.<location_code>.<environment>.zulily.com

 

Also related to our host names, each minion has custom grains set for all of these fields, and these grains are quite useful in many of our states!
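
To illustrate, here is roughly what those grains might look like on a minion, using /etc/salt/grains (a sketch; the grain names, team, function and location codes shown are hypothetical, not our exact conventions):

# /etc/salt/grains on a host named site-web-003.sea1.prod.zulily.com (illustrative)
product_team: site
function: web
node_number: '003'
location_code: sea1
environment: prod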

 

We have found that the majority of states are the same across (real) environments, and environment specifics can instead be managed through pillar targeting.  By keeping all of a team’s states and pillar data within just two git repositories, we have found we are overall more DRY than we would have been with separate git repositories (per real environment).
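
As a sketch of what this looks like in practice, a team’s pillar top.sls can target environment-specific pillar files with the environment grain (the sls names here are hypothetical):

# pillar top.sls in the "site" saltenv (illustrative)
site:
  '*':
    - core.settings
  'environment:dev':
    - match: grain
    - dev.settings
  'environment:prod':
    - match: grain
    - prod.settings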

 

Additionally, salt states may be extended and overridden, which may be useful for different (real) environments when necessary. So instead of having a flat state tree, we have subdirectories such as ‘core’, ‘dev’ and ‘prod’. Our approach is to place just about everything under core, and use environment subdirectories only when we must have environment-specific states, or when we simply wish to extend or override states residing in core. If parent states in core must be modified, it is important to consider the ramifications for any environment-specific children. We generally don’t do a lot of extending and overriding at zulily, and instead focus on placing environment specifics within targeted pillar data, as previously mentioned.
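
When we do extend, a dev-specific state might include and extend a parent state from core along these lines (a sketch; the state ID and file paths are illustrative):

# dev/aliases/init.sls (illustrative)
include:
  - core.aliases

extend:
  /etc/aliases:
    file:
      - source: salt://dev/aliases/files/aliases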

 

We have the same layout in our pillar trees for consistency. Note that pillar keys must be unique and carry no hierarchy when retrieved; the directory hierarchy does matter, however, for pillar top.sls targeting!

 

The following state tree example illustrates our layout approach, here for our “provision” environment:

 

├── core
│   └── aliases
│       ├── files
│       │   └── aliases
│       ├── init.sls
│       └── map.jinja
├── dev
├── prod
└── top.sls

 

But wait: if a highstate is run, what happens, and couldn’t this be dangerous? Running a highstate does have the potential to be dangerous. If a product team accidentally targets *their* very specific MySQL states to ‘*’, for example, another team’s database server could suffer a serious outage. To mitigate the risk of such an incident, pushes to all of our state and pillar repositories are subject to inspection by a git push constraint that deserializes the top.sls yaml and evaluates all targets. The targeting allowed in our top.sls files is very restrictive: only a subset of target types is allowed, and references to non-relevant environments are disallowed. Also worth noting is that only very specific, authorized team members have write access to our salt and pillar product team repositories; a member of the site team may not write to the infrastructure team’s salt and pillar repositories.

 

Also worth mentioning, one additional layer of risk mitigation we have in place is that all of our users always append “saltenv=<product_team>” to their salt-calls.
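
For a member of the site team, for example, that looks roughly like the following (the state name is illustrative):

salt-call state.highstate saltenv=site
salt-call state.sls core.aliases saltenv=site
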
We do have additional environments which are not tied to any specific product team, known as base, provision and periodic. The base environment is empty! The latter two are critical to our operations; we’ll explain them next.

 

Less salt (highstate runs)

In our experience at zulily, we’ve learned that the vast majority of our salt states really only need to run just once, or rather infrequently. So our standard practice for product teams is to run highstates only once per week, or on an as-needed basis, and we do so very cautiously. It goes against the traditional wisdom of converging at least hourly, but in the end, we have had consistent environments and greater stability with this approach. It is nearly inevitable that even the most senior automation engineer will make a bad push to master at some point, and a timed hourly run could pick that up, with potentially disastrous consequences. Configuration management is a powerful thing, and we have found our approach to highstating to be the appropriate balance for zulily.

 

Now, getting to zulily’s two important non-product team “environments”…

 

The first is known as “provision”. States in the provision environment provide the most basic packages and configurations with reasonable defaults, which work for most product teams, most of the time. What is very particular about the provision environment is that a “provision highstate” is only run once! That’s correct: we almost never re-run any of these states once an instance goes into production. There just isn’t a need, and more importantly, re-runs may conflict with subsequent customizations by product teams, and we would rather avoid unnecessary configuration breakage.

 

To limit ourselves to a single provision highstate, our provision top.sls targeting requires that a grain known as “in_provisioning” be set to True. When an instance has been provisioned, we remove the grain — a provision highstate will never run again, as long as the grain remains absent. Very seldom, we have had to roll out updates to a few individual states within provision, which we accomplish very cautiously with specific state.sls jobs.
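
A rough sketch of how this fits together (the state names are illustrative):

# provision top.sls -- only hosts still carrying the grain are targeted
provision:
  'in_provisioning:True':
    - match: grain
    - core.aliases
    - core.packages

# at provision time (illustrative):
#   salt-call state.highstate saltenv=provision
#   salt-call grains.delval in_provisioning destructive=True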

 

We have recently open sourced a sampling of many of our basic states used at provision time; please have a look at our GitHub project known as alkali.

 

The second non-product team “environment” is known as periodic. While our standard is to run a full product team environment highstate once per week, some changes need to get out in near realtime. For zulily, these types of changes are limited to states addressing resources such as posix users and groups, sudoers, iptables rules, and ssh key management. Periodic highstates are cron’d every few minutes at present, with saltenv=periodic of course. We are, however, moving to triggered periodic highstates, as cron’d periodic highstate runs may block other jobs.
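
The cron’d runs look roughly like this (a sketch; the interval is illustrative):

# /etc/cron.d/salt-periodic (illustrative)
*/5 * * * * root salt-call state.highstate saltenv=periodic > /dev/null 2>&1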

 

State development workflow

We have done a significant amount of state development at zulily, and for the most part, this has occurred within Vagrant environments. Vagrant has worked very well for us, but more recently we are beginning to leverage Docker containers for this purpose. For more information on how we are doing this, please check out a project we just released, known as buoyant.

 

Given our salt development environment, whether Vagrant or Docker, we typically iterate on states working out of our home directories (synced folders or Docker volumes), preferably in a branch. Once state and pillar files are ready, we merge into master and configure very restrictive and precise targeting at first, or simply remove or disable existing targeting. This gives us full control over our rollout process across (real) environments, which limits the risk of a service disruption: we know exactly which hosts are executing which states, and when.

 

Pushes to master branches for all salt and pillar git repositories are integrated within just a few minutes with our current automation, and then ready for targeted execution across relevant minions.

 

zulily’s salt masters are controlled by a centralized infrastructure team, and product teams are restricted from running “salt” commands; they do not have access to our masters. They do, however, have all the control, and only the control, they need! Product teams use simple, custom scripts that leverage fabric to execute remote commands on their minions, most notably salt-call (with saltenv specified, of course!).
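
A minimal sketch of what such a wrapper might look like, assuming Fabric 1.x (the task and saltenv shown are illustrative, not our actual scripts):

# fabfile.py (illustrative)
from fabric.api import sudo, task

SALTENV = "site"  # pinned to the product team's saltenv

@task
def highstate():
    """Run a highstate on the targeted minions, always pinning saltenv."""
    sudo("salt-call state.highstate saltenv={0}".format(SALTENV))

# usage: fab -H site-web-003.sea1.prod.zulily.com highstate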

 

Other salt-related open source projects zulily has released

Outside of the aforementioned alkali and buoyant projects, we have recently released four community formulas:

 

 

All of these projects are in their early stages, a bit heavy on the jinja in some cases, and very Ubuntu-specific for the most part at this time. They have, however, shown good promise for us at zulily, and we didn’t want to wait any longer to share them with the community. Our hope is that they will already be useful to some, and worthy of iterating on going forward.

 

Coloring outside of the lines

One of zulily’s core values is to “color outside of the lines,” and our use of SaltStack is no exception. Many of the patterns we use are uncommon, and our approach to environments in particular may not be the first idea that comes to mind for the typical salt user. Our use of salt, with its inherent simplicity and flexibility, has enabled us to decentralize our configuration management, providing multi-tenancy and product team isolation. With self-service capabilities in place, our product teams are empowered to move at a quick cadence, keeping pace with what we call “zulily time” around the office. We’ve had great success with SaltStack at zulily, and we are pleased to share some of our projects and patterns with the community.

 

Happy salting!

zulily’s Kubernetes launch presentation at OSCON

In July Steve Reed from zulily presented at O’Reilly’s Open Source Convention (OSCON). He spoke to zulily’s pre-launch experience with Kubernetes. It was an honor for zulily to be asked to speak as part of the Kubernetes customer showcase, given the success we have had with Kubernetes.

Kubernetes launch announcement: http://googlecloudplatform.blogspot.com/2015/07/Kubernetes-V1-Released.html

Sampling keys in a Redis cluster

We love Redis here at zulily. We store hundreds of millions of keys across many Redis instances, and we built our own internal distributed cache on top of Redis which powers the shopping experience for zulily customers.

One challenge when running a large, distributed cache using Redis (or many other key/value stores for that matter) is the opaque nature of the key spaces. It can be difficult to determine the overall composition of your Redis dataset, since most Redis commands operate on a single key. This is especially true when multiple codebases or teams use the same Redis instance(s), or when sharding your dataset over a large number of Redis instances.

Today, we’re open sourcing a Go package that we wrote to help with that task: reckon.

reckon enables us to periodically sample random keys from Redis instances across our fleet, aggregate statistics about the data contained in them — and then produce basic reports and metrics.

While there are some existing solutions for sampling a Redis key space, the reckon package has a few advantages:

Programmatic access to sampling results

Results from reckon are returned in data structures, not just printed to stdout or a file. This is what allows a user of reckon to sample data across a cluster of redis instances and merge the results to get an overall picture of the keyspaces. We include some example code to do just that.

Arbitrary aggregation based on key and redis data type

reckon also allows you to define arbitrary buckets based on the name of the sampled key and/or the Redis data type (hash, set, list, etc.). During sampling, reckon compiles statistics about the various redis data types, and aggregates those statistics according to the buckets you defined.

Any type that implements the Aggregator interface can instruct reckon about how to group the Redis keys that it samples. This is best illustrated with some simple examples:

To aggregate only Redis sets whose keys start with the letter a:


// setsThatStartWithA buckets only Redis sets whose keys start with "a".
func setsThatStartWithA(key string, valueType reckon.ValueType) []string {
  if strings.HasPrefix(key, "a") && valueType == reckon.TypeSet {
    return []string{"setsThatStartWithA"}
  }
  return []string{}
}

To aggregate sampled keys of any Redis data type that are longer than 80 characters:


// longKeys buckets sampled keys of any Redis data type that are longer than 80 characters.
func longKeys(key string, valueType reckon.ValueType) []string {
  if len(key) > 80 {
    return []string{"long-keys"}
  }
  return []string{}
}

HTML and plain-text reports

When you’re done sampling, aggregating and/or combining the results produced by reckon you can easily produce a report of the findings in either plain-text or static HTML. An example HTML report is shown below:

(a sample report showing key/value size distributions)

The report shows the number of keys sampled, along with some example keys and elements of those keys (the number of example keys/elements is configurable). Additionally, a distribution of the sizes of both the keys and elements is shown — in both standard and “power-of-two” form. The power-of-two form shows a more concise view of the distribution, using a concept borrowed from the original Redis sampler: each row shows a number p, along with the number of keys/elements that are <= p and > p/2.

For instance, using the example report shown above, you can see that:

  • 68% of the keys sampled had key lengths between 8 and 16 characters
  • 89.69% of the sets sampled had between 16 and 32 elements
  • the mean number of elements in the sampled sets is 19.7

We have more features and refinements in the works for reckon, but in the meantime, check out the repo on github and let us know what you think. The codebase includes several example binaries to get you started that demonstrate the various usages of the package.

Pull requests are always welcome — and remember: Always be samplin’.

The way we Go(lang)

Here at zulily, Go is increasingly becoming the language of choice for many new projects, from tiny command-line apps to high-volume, distributed services. We love the language and the tooling, and some of us are more than happy to talk your ear off about it. Setting aside the merits and faults of the language design for a moment (over which much digital ink has already been spilled), it’s undeniable that Go provides several capabilities that make a developer’s life much easier when it comes to building and deploying software: static binaries and (extremely) fast compilation.

What makes a good build?

In general, the ideal software build should be:

  • fast
  • predictable
  • repeatable

Being fast allows developers to quickly iterate through the develop/build/test cycle, and predictable/repeatable builds allow for confidence when shipping new code to production, rolling back to a prior version or attempting to reproduce bugs.

Fast builds are provided by the Go compiler, which was designed such that:

It is possible to compile a large Go program in a few seconds on a single computer.

(There’s much more to be said on that topic in this interesting talk.)

We accomplish predictable and repeatable builds using a somewhat unconventional build tool: a Docker container.

Docker container as “build server”

Many developers use a remote build server or CI server in order to achieve predictable, repeatable builds. This makes intuitive sense, as the configuration and software on a build server can be carefully managed and controlled. Developer workstation setups become irrelevant since all builds happen on a remote machine. However, if you’ve spent any time around Docker containers, you know that a container can easily provide the same thing: a hermetically sealed, controlled environment in which to build your software, regardless of the software and configuration that exist outside the container.

By building our Go binaries using a Docker container, we reap the same benefits of a remote build server, and retain the speed and short dev/build/test cycle that makes working with Go so productive.

Our build container:

  • uses a known, pinned version of Go (v1.4.2 at the time of writing)
  • compiles binaries as true static binaries, with no cgo or dynamically-linked networking packages
  • uses vendored dependencies provided by godep
  • versions the binary with the latest git SHA in the source repo

This means that our builds stay consistent regardless of which version of Go is installed on a developer’s workstation or which Go packages happen to be on their $GOPATH! It doesn’t matter if the developer has godep or golint installed, whether they’re running an old version of Go, the latest stable version of Go or even a bleeding-edge build from source!
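
As a sketch of what such a containerized build can look like (the image tag, paths and package name here are illustrative, not our exact tooling):

GIT_SHA=$(git rev-parse --short HEAD)
docker run --rm \
  -e CGO_ENABLED=0 \
  -v "$(pwd)":/go/src/github.com/example/myapp \
  -w /go/src/github.com/example/myapp \
  golang:1.4.2 \
  go build -a -ldflags "-X main.BuildSHA ${GIT_SHA}" -o myapp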

Git SHA as version number

godep is becoming a de facto standard for managing dependencies in Go projects, and vendoring (aka copying code into your project’s source tree) is the suggested way to produce repeatable Go builds. Godep vendors dependent code and keeps track of the git SHA for each dependency. We liked this approach, and decided to use git SHAs as versions for our binaries.

We accomplish this by “stamping” each of our binaries with the latest git SHA during the build process, using the ldflags option of the Go linker. For example:

ldflags "-X main.BuildSHA ${GIT_SHA}"

This little gem sets the value of the BuildSHA variable in the main package to be the value of the GIT_SHA environment variable (which we set to the latest git SHA in the current repo). This means that the following Go code, when built using the above technique, will print the latest git SHA in its source repo:

package main

import "fmt"

var BuildSHA string // set by the linker at build time!

func main() {
  fmt.Printf("I'm running version: %s\n", BuildSHA)
}

Enter: boilerplate

Today, we’re open sourcing a simple project that we use for “bootstrapping” a new Go project that accomplishes all of the above. Enter: boilerplate

Boilerplate can be used to quickly set up a new Go project that includes:

  • a Docker container for performing Go builds as described above
  • a Makefile for building/testing/linting/etc. (because make is all you need)
  • a simple Dockerfile that uses the compiled binary as the container’s entrypoint
  • basic .gitignore and .dockerignore files

It even stubs out a Go source file for your binary’s main package.

You can find boilerplate on github. The project’s README includes some quick examples, as well as more details about the generated project.

Now, go forth and build! (pun intended)

Google Compute Engine Hadoop clusters with zdutil

Here at zulily, we use Google Compute Engine (GCE) for running our Hadoop clusters. Google has a utility called bdutil for setting up and tearing down Hadoop clusters on GCE. We ran into a number of issues when using the utility and were using an internally patched version of it to create our Hadoop clusters. If you look at the source, bdutil is essentially a collection of bash scripts that automate the various steps of creating a GCE instance and provisioning it with all the necessary software needed to run Hadoop. One major issue we found with bdutil was that there is no way to provision a Hadoop cluster where the datanodes do not have external IP addresses. For clusters with many datanodes — the kind we typically run — this means we end up running up against our quota of external IP addresses. Additionally, there is no reason for the datanodes to have external IP addresses, as they should not be accessible to the public.

We decided to stop patching bdutil and write our own utility to provision a Hadoop cluster. The utility is called zdutil and you can find it on our GitHub page. Here’s how it works:

  • First, GCE instances are created for the namenode and all datanodes in your Hadoop cluster.
  • Then, any persistent disks that you requested are created and attached to the instances.
  • If you have any tags that you would like to be applied to the namenode or datanodes, the tags are added to the instances. This saves you from having to manually tag every single instance in your cluster or write your own script to do so.
  • Next, all of the required setup scripts to provision the namenode and datanodes are copied to a GCS bucket of your choosing. The namenode then provisions itself.
  • Once it completes, it copies (via scp) all scripts needed for datanode provisioning to each datanode and then each datanode will provision itself.
  • Once all datanodes have been provisioned, the namenode will start the Hadoop cluster.

If you deploy the datanodes with either external or ephemeral IP addresses, they will have internet access as determined by the rules of your GCE network. If you deploy the datanodes with “none” for the IP address, they will proxy through the namenode using Squid. You don’t have to configure any of this yourself; zdutil will take care of the details for you, including installing and provisioning Squid on your namenode. It is also important to be aware that Google’s version of the Google Cloud Storage Connector currently does not support proxying. If you use zdutil, it will install our fork of the GCS Connector which does support proxying by adding the following properties to your Hadoop core-site.xml configuration file: fs.gs.proxy.host and fs.gs.proxy.port.
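
In core-site.xml, that configuration ends up looking roughly like the following (the host and port values are illustrative; 3128 is Squid’s default port):

<!-- core-site.xml on each datanode (illustrative values) -->
<property>
  <name>fs.gs.proxy.host</name>
  <value>hadoop-namenode</value>
</property>
<property>
  <name>fs.gs.proxy.port</name>
  <value>3128</value>
</property>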

If you have any need for zdutil, please use it and give us your feedback. At the moment we only support Debian-based images and we only support Hadoop version 1. If you would like to see another OS supported or Yarn support, please add an issue to the GitHub page.

 

Optimizing memory consumption of Radix Trees in Java

On the Relevancy team at zulily, we are often required to load a large number of large strings into memory. This often causes memory issues. After looking at multiple ways to reduce memory pressure, we settled on Radix Trees to store these strings. Radix Trees provide very fast prefix searching and are great for auto-complete services and similar uses. This post focuses entirely on memory consumption.

What Is A Radix Tree?

Radix Trees take sequences of data and organize them in a tree structure. Strings with common prefixes end up sharing nodes toward the top of this structure, which is how memory savings is realized. Consider the following example, where we store “antidisestablishmentarian” and “antidisestablishmentarianism” in a Radix Tree:

+- antidisestablishmentarian (node 1)
                           +- ism (node 2)

Two strings, totaling 53 characters, can be stored as two nodes in a tree. The first node stores the common prefix (25 characters) between it and its children. The second stores the rest (3 characters). In terms of character data stored, the Radix Tree stores the same information in approximately 53% of the space (not counting the additional overhead introduced by the tree structure itself).

If you add the string “antibacterial” to the tree, you need to break apart node 1 and shuffle things around. You end up with:

+- anti                             (node 3)
      |- disestablishmentarian      (node 4)
      |                      +- ism (node 2)
      +- bacterial                  (node 5)


Real-World Performance

We run a lot of software in the JVM, where memory performance can be tricky to measure. In order to validate our Radix Tree implementation and measure the impact, I pumped a bunch of pseudo-realistic data into various collections and captured memory snapshots with YourKit Java Profiler.

Input Data

It didn’t take long to hack together some real-looking data in Ruby with Faker. I created four input files of approximately 1,000,000 strings that included a random selection of 12-digit numbers, bitcoin addresses, email addresses and ISBNs.

sreed:src/ $ head zulily-oss/radix-tree/12-digit-numbers.txt
141273396879
414492487489
353513537462
511391464467
633249176834
347155664352
632411507158
752672544343
483117282483
211673267195

sreed:src/ $ head zulily-oss/radix-tree/bitcoins.txt
1Mp85mezCtBXZDVHGSTn3NYZuriwRMmW6D
1N8ziuitNLmSnaXy2psYpLcXvugHw1Yc5s
18DnruBzLHmnVHQhDghoa6eDt6sDkfuWKr
1A3sRfAnP89HE4RgNQARa3kCq4xFEF9eev
12WR4DrsR4mM8gDHZCuqXe2h37VUSUPSNu
1PRmYuevwZXZamBEgANzLXe2SjFneGDsXp
1EpjPwt8Ap47XA6HwJhCTxUZRDH11GKWuQ
1P8MAgobhLw4FYcFHbw7a8t2FvQZg8K597
15xhiiLdkin8zi6S5KL9DkDDQyvLb1pjjT
1NPEZeEjgGu5TYdz5d3kxjVfLwxAZ2fK6f

sreed:src/ $ head zulily-oss/radix-tree/emails.txt
jakayla.hoppe@krajcikpollich.info
abbey.goodwin@tromp.org
laney.dach@walkerlubowitz.biz
rosanna_towne@marks.name
sherwood@oberbrunnerauer.name
mohamed_rice@champlin.com
margaret_kirlin@greenfeldercasper.net
vince@funk.net
leora_ohara@hackett.biz
audra.hermann@bauch.org

sreed:src/ $ head zulily-oss/radix-tree/isbns.txt
216962073-7
640524955-7
955360834-5
429656067-0
605437693-4
204030847-4
037410069-1
239193083-6
182539755-4
034988227-4

Measuring Memory with YourKit

YourKit provides a measurement of “retained size” in its memory snapshots which is helpful when trying to understand how your code is impacting the heap. What isn’t necessarily intuitive about it, though, is what objects it excludes from this “retained size” measurement. Their documentation is very helpful here: only object references that are exclusively held by the object you’re measuring will be included. Instead of telling you “this is how much memory usage your object imposes on the VM,” retained size instead tells you “this is how much memory the VM would be able to garbage-collect if it were gone.” This is a subtle, but very real, difference if you wish to optimize memory consumption.

Thus, my memory testing needed to ensure that each collection held complete copies of the objects I wished to measure. In this case, each string key needed to be duplicated (I decided to intern and share every value I stored in order to measure only the memory gains from different key storage techniques).

// Results in shared reference, and inaccurate measurement
map1.put(key, value);
map2.put(key, value);

// Results in shared char[] reference, and better but
// still inaccurate measurement
map1.put(new String(key), value);
map2.put(new String(key), value);

// Results in complete copy of keys, and accurate measurement
map1.put(new String(key.toCharArray()), value);
map2.put(new String(key.toCharArray()), value);

Collections Tested

I tested our own Radix Tree implementation, ConcurrentRadixTree from https://code.google.com/p/concurrent-trees/, a string array, Guava‘s ImmutableMap and Java’s HashMap, TreeMap, Hashtable and LinkedHashMap. Each collection stored the same values for each key.

Both zulily’s Radix Tree and the ConcurrentRadixTree from concurrent-trees were configured to store string data as UTF-8-encoded byte arrays.

ConcurrentRadixTree was included simply to ensure that our own version (to be open-sourced soon) was worth the effort. The others were measured simply to highlight the benefits of Radix Tree storage for different input types. Each collection has its own merits, and in most ways other than storage footprint (put/get performance, concurrency and other features) they are all superior to the Radix Tree.

Results


First of all, Guava’s ImmutableMap is pretty good. It stored the same key and value data as java.util.HashMap in 92-95% of the space. The Radix Tree breaks keys into byte array sequences and stores them in a tree structure based on common prefixes. This resulted in a best case of 62% of the size of the ImmutableMap for bitcoin addresses (strings which have many common prefixes) and a worst case of 88% for random 12-digit numbers. We see that the memory used by this data structure is largely dependent on the type of data put into it. Large strings with many large common prefixes are stored very efficiently in a narrow tree structure. Unique strings create a lot of branches in the underlying tree, making it very wide and adding a lot of overhead.

Converting Java Strings to byte arrays accounts for most of the memory savings, but not all. Byte array storage was anywhere from 90% (bitcoin addresses) to 99% (ISBNs) in the tests I ran.

For us, storing byte-encoded representations of string data in a radix tree allowed us to reclaim valuable memory in our services. However, it wasn’t until we validated the implementation in an accurate manner, with realistic data and trustworthy tools, that we could rest easy knowing we had accomplished what we set out to do.

zulily Open Source


Most people do not think of zulily as a tech company, which is understandable. When we have made headlines, it has been for changing the way moms shop online, not for our technological or logistical achievements.

The truth is that zulily would not exist as it does today had we not made major investments in the software that runs our business. Much of this software, while proprietary, was built on a foundation of innumerable open source projects. It is important for us to be able to give back to the community that supports these projects.

Please visit our github page. We are just getting started, but it is here that you can meet and interact with our engineers who are giving back to open source via the projects they are maintaining. Here also you can get an idea of the type of technology we have chosen to rely on. Keep coming back to this blog as we continue to engage with those who share our zeal for e-commerce, the web, mobile, data and the rest of it.

Think of zulily as a tech company: one that is built upon, and enthusiastically supports, open source software and the community that has created it.