This page demonstrates how you can use Publist's
reference mode. You can insert into your page simple PHP commands that look very similar to
LaTeX/BibTeX's \cite commands.
For example, adding this command: cite("atikoglu12:worklod") would create a bracketed first reference,
like here: [1].
You can then add more cite commands, even with multiple comma-separated reference keys, like this one,
which will appear as [2, 3].
If you repeat a previously used key, as in this example:
cite("cirne12:websched,feitelson13:devops , berezecki11:manycore "),
Publist will reuse the correct citation numbers, as shown here:
[2, 4, 5].
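The numbering behavior just described (each key gets a sequential number on first use, and repeated keys keep their original number) can be modeled in a few lines. Publist itself is written in PHP; the following is only an illustrative Python sketch of that behavior, and the key "xu14:hippos" is a hypothetical stand-in for the third reference's real key:

```python
class RefTracker:
    """Illustrative model of Publist's citation numbering (not its actual code)."""

    def __init__(self):
        # key -> citation number, assigned in order of first use
        self.numbers = {}

    def cite(self, keys):
        """Return a bracketed citation string for a comma-separated key list."""
        nums = []
        for key in keys.split(","):
            key = key.strip()  # stray spaces around keys are tolerated
            if key not in self.numbers:
                self.numbers[key] = len(self.numbers) + 1
            nums.append(self.numbers[key])
        return "[" + ", ".join(str(n) for n in nums) + "]"


refs = RefTracker()
print(refs.cite("atikoglu12:worklod"))   # first key cited -> [1]
print(refs.cite("cirne12:websched,xu14:hippos"))  # two new keys -> [2, 3]
# Repeating cirne12:websched reuses number 2 -> [2, 4, 5]
print(refs.cite("cirne12:websched,feitelson13:devops , berezecki11:manycore "))
```

Note that the model deliberately assigns numbers by order of first citation, not by the order entries appear in the bibliography file, which matches the numbering shown on this page.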
Finally, when you wish to show the full list of references, you simply call another function,
print_refs(), whose output is shown below:
Berk Atikoglu, Yuehai Xu, Eitan Frachtenberg, Song Jiang and Mike Paleczny.
Workload Analysis of a Large-Scale Key-Value Store,
In the 12th ACM/IFIP Joint Conference on Measurement and Modeling of Computer Systems (SIGMETRICS/Performance'12),
Archival version in Performance Evaluation Review vol. 40(1): 53--64.
Key-value stores are a vital component in many scale-out enterprises,
including social networks, online retail, and risk analysis. Accordingly,
they are receiving increased attention from the research community
in an effort to improve their performance, scalability, reliability,
cost, and power consumption. To be effective, such efforts require
a detailed understanding of realistic key-value workloads. And yet
little is known about these workloads outside of the companies that
operate them. This paper aims to address this gap.
To this end, we have collected detailed traces from Facebook's Memcached
deployment, arguably the world's largest. The traces capture over
284 billion requests from five different Memcached use cases over
several days. We analyze the workloads from multiple angles, including:
request composition, size, and rate; cache efficacy; temporal patterns;
and application use cases. We also propose a simple model of the most
representative trace to enable the generation of more realistic synthetic
workloads by the community.
Our analysis details many characteristics of the caching workload.
It also reveals a number of surprises: a GET/SET ratio of 30:1 that
is higher than assumed in the literature; some applications of Memcached
behave more like persistent storage than a cache; strong locality
metrics, such as keys accessed many millions of times a day, do not
always suffice for a high hit rate; and there is still room for efficiency
and hit rate improvements in Memcached's implementation. Toward the
last point, we make several suggestions that address the exposed deficiencies.
Walfredo Cirne and Eitan Frachtenberg.
Web-Scale Job Scheduling,
In Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science.
Web datacenters and clusters can be larger than the world's largest
supercomputers, and run workloads that are at least as heterogeneous
and complex as their high-performance computing counterparts. And
yet little is known about the unique job scheduling challenges
of these environments. This article aims to ameliorate this situation.
It discusses the challenges of running web infrastructure and describes several
techniques to address them. It also presents
some of the problems that remain open in the field.
Yuehai Xu, Eitan Frachtenberg and Song Jiang.
Building a High-Performance Key-Value Cache as an Energy-Efficient Appliance,
In the 32nd International Symposium on Computer Performance, Modeling, Measurements, and Evaluation (Performance'14),
Best Student Paper Award.
Archival version in Performance Evaluation vol. 79: 24--37.
Key-value (KV) stores have become a critical infrastructure
component supporting various services in the cloud. Long considered
an application that is memory-bound and network-bound, recent
KV-store implementations on multicore servers grow increasingly
CPU-bound instead. This limitation often leads to under-utilization
of available bandwidth and poor energy efficiency, as well as long
response times under heavy load. To address these issues, we present
Hippos, a high-throughput, low-latency, and energy-efficient
key-value store implementation. Hippos moves the KV store into
the operating system's kernel and thus removes most of the overhead
associated with the network stack and system calls. Hippos
uses the Netfilter framework to quickly handle UDP packets,
removing the overhead of UDP-based GET requests almost entirely.
Combined with lock-free multithreaded data access, Hippos
removes several performance bottlenecks both internal and external
to the KV-store application.
We prototyped Hippos as a Linux loadable kernel module and
evaluated it against the ubiquitous Memcached using various micro-benchmarks
and workloads from Facebook's production systems. The experiments
show that Hippos provides some 20--200% throughput improvements on
a 1Gbps network (up to 590% improvement on a 10Gbps network) and
5--20% saving of power compared with Memcached.
Dror G. Feitelson, Eitan Frachtenberg and Kent L. Beck.
Development and Deployment at Facebook,
In IEEE Internet Computing vol. 17.
Mateusz Berezecki, Eitan Frachtenberg, Mike Paleczny and Kenneth Steele.
Many-Core Key-Value Store,
In the Second International Green Computing Conference (IGCC'11),
Scaling data centers to handle task-parallel workloads requires
balancing the cost of hardware, operations, and power.
Low-power, low-core-count servers reduce costs in
one of these dimensions, but may require additional nodes to
provide the required quality of service or increase costs by
underutilizing memory and other resources.
We show that the throughput, response time, and power
consumption of a high-core-count processor operating at a low
clock rate and very low power consumption can perform well
when compared to a platform using faster but fewer commodity
cores. Specific measurements are made for a key-value store,
Memcached, using a variety of systems based on three different
processors: the 4-core Intel Xeon L5520, 8-core AMD Opteron
6128 HE, and 64-core Tilera TILEPro64.