ECS 265: Distributed Database Systems

Instructor: Mohammad Sadoghi
E-mail: [email protected]
Office: Kemper Hall 3055
Office Hours (Zoom): Fridays 3:00pm - 5:00pm (by appointment)

Teaching Fellow: Wayne Wang
Email: [email protected]
Office Hours (Zoom): Wednesdays 3:00pm - 5:00pm (by appointment)

Lecture Time: Mon/Wed/Fri 12:10pm - 1:00pm
Location: Teaching and Learning Complex 1214

Final Project Progress Meeting (Discussion): Fri 1:10pm - 2:00pm
Location: Wellman Hall 212
Note: At least one team member must attend the progress meeting every week.


This graduate seminar course surveys recent developments in data management, with a primary focus on blockchain, covering topics such as storage architectures, concurrency control, agreement protocols, Byzantine fault-tolerant protocols, secure transactions, distributed ledgers, decentralized and resilient infrastructures, and trusted execution.
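As a small taste of the Byzantine fault-tolerance topics above, the classic resilience bound n >= 3f + 1 ties the number of replicas n to the number of Byzantine faults f that a PBFT-style protocol can tolerate. A minimal illustrative sketch (the helper names below are ours, not drawn from any of the course readings):

```python
def max_byzantine_faults(n: int) -> int:
    """Maximum number of Byzantine replicas f tolerated by a
    PBFT-style protocol with n replicas, from n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Quorum size 2f + 1: any two quorums of this size in an
    n-replica cluster intersect in at least one correct replica."""
    return 2 * max_byzantine_faults(n) + 1
```

For example, a 4-replica deployment tolerates a single Byzantine replica with quorums of size 3; tolerating one more fault requires growing the cluster to 7 replicas.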



Required (all freely available when accessed from the UCD network). ResilientDB Resources:
  • Research Paper - ResilientDB: Global Scale Resilient Blockchain Fabric. S. Gupta, S. Rahnama, J. Hellings, M. Sadoghi. PVLDB 2020. [Paper, Slides, Video]
  • Research Paper - RingBFT: Resilient Consensus over Sharded Ring Topology. S. Rahnama, S. Gupta, R. Sogani, D. Krishnan, M. Sadoghi. EDBT 2022. [Paper]
  • Demo Paper - Scalable, Resilient and Configurable Permissioned Blockchain Fabric. S. Rahnama, S. Gupta, T. Qadah, J. Hellings, M. Sadoghi. PVLDB 2020. [Paper, Slides, Video]
  • Tutorial - Building High Throughput Permissioned Blockchain Fabrics: Challenges and Opportunities. S. Gupta, S. Rahnama, J. Hellings, M. Sadoghi. PVLDB 2020. [Paper, Slides: Theory | System, Video]
  • Architecture - Permissioned Blockchain Through the Looking Glass: Architectural and Implementation Lessons Learned. S. Gupta, S. Rahnama, M. Sadoghi. ICDCS 2020. [Preprint, Slides, Video]
  • Others - ResilientDB Fabric Website, Journey, Vision/Slides, Bitcoin Radio, Code, Hands-on, Blog, Wiki.
  • Readings in Database Systems. 5th Edition. Joseph M. Hellerstein, Michael Stonebraker. 2015
  • The Elements of Style. 4th Edition. William Strunk Jr. and E. B. White. 1999
  • Style: Toward Clarity and Grace. Joseph M. Williams. 1995

Project Description:

The most important component of the course is a quarter-long creative project on topics broadly related to distributed transactions, consensus protocols, and distributed ledgers (blockchain fabrics). The project must be done in groups* (up to five students), and the role and contributions of each group member must be clearly articulated. All projects must be approved and supervised by the instructor. The outcome of the final project consists of the following: (1) designing novel consensus protocols; benchmarking and studying existing consensus protocols; developing novel techniques or architectures to improve the performance of a blockchain fabric (e.g., sharding, cross-chain, hybrid protocols [ResilientDB Fabric, Codebase, Hands-on Tutorial, Blog, Wiki]); or developing novel applications for blockchain (e.g., DeFi applications with [Radix DeFi Engine, Overview of Scrypto Smart Contracts, and Scrypto Documentation]); and (2) writing a grant proposal for startup seed funding or a short research paper with the ultimate goal of submitting to a top-tier conference (submission is, of course, optional and does not affect your final grade). All project-related reports must be written in LaTeX using ACM templates or in the form of a detailed and accessible blog or professional website (e.g., Blog, CryptoLunch, Resilient Record).

Grant Proposal Track

Whether you choose the grant proposal or the research paper route, there must be a proof-of-concept prototype of your ideas to demonstrate feasibility or effectiveness. For the grant proposal track, we adopt the well-known Small Business Innovation Research (SBIR) program, which fosters commercialization of research ideas by small businesses. The SBIR project description template can be found here.

Research Track

For the research track, we adopt a standard conference-style paper format (e.g., PVLDB or SIGMOD). An example of a well-written blockchain research paper is ResilientDB, and many more examples can be found in our ECS 265 course.

Research Paper
A research paper (and/or presentation slides) would consist of the following key components:
  1. Motivation (why this problem is important and interesting)
  2. Gap (what are the challenges and what are the shortcomings of the most relevant work)
  3. Problem statement
  4. Key intuitions behind your proposed approach and sketch of the solution in a top-down approach
  5. Key benefits of your proposed approach
  6. Overall architecture/design of your solution
  7. Details of your solution
  8. Analytical correctness and complexity results
  9. Implementation Details
  10. Evaluation methodology and plans (competing algorithms, datasets, platforms, etc.)
  11. Empirical results (e.g., latency, throughput, precision, recall, f-measure, etc.)
  12. List of contributions and their importance (both scientific and technical contributions)
  13. Conclusions
  14. Future Directions/Next Steps
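The empirical results in item 11 typically report throughput and tail latency. As a hedged sketch of how such metrics might be computed from raw per-request timings (the function and field names are our own, for illustration only):

```python
def summarize(latencies_ms, duration_s):
    """Summarize a benchmark run: throughput from the request count
    and run length, plus median and 99th-percentile latency."""
    xs = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted sample.
        return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]

    return {
        "throughput_tps": len(xs) / duration_s,
        "p50_ms": pct(50),
        "p99_ms": pct(99),
    }
```

In a real evaluation one would also report variance across repeated runs, warm-up effects, and the competing systems and workloads from item 10.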
Benchmarking Paper
A benchmarking paper (and/or presentation slides) would consist of the following key components:
  1. Motivation (why this problem is important and interesting)
  2. Gap (what are the challenges and what are the shortcomings of the most relevant benchmarking work)
  3. Problem statement
  4. A brief survey/discussion of relevant approaches/systems
  5. Justification for choosing these approaches/systems
  6. Evaluation Methodology:
    machine/cluster requirements and setup; single-node and multi-node configurations; in-memory vs. disk-based approaches; selected systems/engines; data workloads (synthetic and/or real data); query workloads; types of experiments.
  7. List of your contributions and their importance (both scientific and technical contributions)
  8. Conclusions
  9. Future Directions/Next Steps

Weekly Progress Meetings:

It is important that tangible progress is made every week. Progress must be captured in slides and can fall into any of the following categories; to make steady progress, at least one of the categories below must be fulfilled each week. Of course, exceptions can be made if you have several tests/exams in a given week, but generally it is important to maintain consistent, healthy momentum.

  • Proposing a new well-thought idea (captured clearly on the slide),
  • Solidifying an existing idea or algorithm (captured clearly on the slide),
  • Providing correctness or time/space complexity proofs of the proposed algorithms/protocols,
  • Formulating a set of new challenges with partial solutions (also captured clearly on the slides),
  • Developing a module of the ongoing research prototype,
  • Reporting detailed implementation progress,
  • Evaluating and presenting a set of experimental results, and/or
  • Completing the write-up for several sections of a research paper

Weekly Presentation:

Each student is expected to present 1 (or 2, depending on the number of students taking the course) papers throughout the quarter. All presentations are done in groups, so up to 4 students need to coordinate, study, and present the paper together. Students must also submit their presentation slides to be posted on the website. Students must coordinate with the TA in advance on the paper they wish to present (we adopt a first-come, first-served policy when assigning papers). All paper selections must be sent to the TA before September 30; if no selection is received by the due date, the TA will make a random assignment. Student presentations start on October 3, 2022. Each week, we will have two presentations, one on Monday and one on Wednesday. We dedicate the Friday of each week to an even deeper discussion of the papers presented during the week. The two groups of presenters will lead the Friday discussion, so all of them are expected to coordinate and prepare before the Friday meeting. Please note that the two papers selected for each week are highly related, to ensure a coherent and exciting discussion.

Furthermore, everyone is expected to attend all lectures, that is, to participate and be present when your peers (other students in the class) are presenting their papers. Half of the presentation grade is reserved for giving a high-quality presentation, and the other half is dedicated to class participation, i.e., attending every lecture and asking insightful questions!

Additionally, every week, each student must choose one paper among the papers being presented that week and write a conference-style review. A standard review template consists of (1) an explanation of the paper in your own words in one or two paragraphs; (2) three strong points; (3) three weak points; and (4) detailed yet concise comments that expand upon the weak points and beyond. When writing strong and weak points, avoid general comments such as "weak motivation" or "poor writing" and be as specific as possible; such general comments may be included at the end of your detailed comments.

* With prior permission from the instructor, the project can be done individually if there is sufficient justification.

Team Dynamics:

For each group, it is recommended that each member lead one aspect of the project while contributing to and learning about the other parts. It is further recommended that a project manager be elected within each group to coordinate all efforts and manage group dynamics, in addition to making technical contributions to the project.

It is a fact of life that group work, whether at school or in society, involves occasional conflicts; it is crucial to learn how to resolve our differences and to be receptive, open, and kind to one another. In kindness and reflection, we shall aim to resolve all conflicts. It is the group's responsibility to handle all internal affairs, involving the instructor only when absolutely necessary. Note, however, that group restructuring will be granted only under very rare, exceptional circumstances: once a group is formed, for at least 10 weeks we must learn how to work with each other in harmony.


The final grade will be based upon the following components:
  • Paper Reviews: 10% (Due on Sundays)
  • Paper Presentations: 20% (Selection due on September 30, 2022)
  • Project Proposal Report (2 pages):   20% (Due on October 21, 2022)
    [Project Pitch Template]
  • Mid-term Progress Report (4 pages):   20% (Due on November 11, 2022)
    [Building upon the project proposal, identifying key milestones/objectives broken down into activities and tasks, timelines, and associated risks, and what has been achieved so far...]
  • Final Review Report (8 pages):   30% (Due on December 2, 2022)
    [SBIR Project Description Template, Research Paper] OR [Blog, Websites: CryptoLunch, Resilient Record]

Course Policy:

In this class, we adopt the UC Davis Code of Academic Conduct available here.


Below you will find a comprehensive collection of papers related to recent developments in the broad area of distributed transaction processing. For the purposes of this course, we will discuss only a small subset of these papers. Students are encouraged to identify other relevant papers as well.

  • Eric Brewer. Towards robust distributed systems. PODC'00
  • Seth Gilbert and Nancy Lynch. Brewer’s conjecture and the feasibility of consistent, available, partition-tolerant web services. SIGACT News’02
  • Pat Helland. Life beyond distributed transactions: an apostate’s opinion. CIDR’07
  • Dan Pritchett. BASE: An Acid Alternative. Queue’08
  • Hiroshi Wada, Alan Fekete, Liang Zhao, Kevin Lee, and Anna Liu. Data consistency properties and the trade-offs in commercial cloud storage: the consumers’ perspective. CIDR’11
  • Eric Brewer. CAP twelve years later: How the “rules” have changed. Computer’12
  • D. Abadi. Consistency tradeoffs in modern distributed database system design: CAP is only part of the story. Computer’12
  • Peter Bailis and Ali Ghodsi. Eventual consistency today: Limitations, extensions, and beyond. ACM Queue’13
  • Jeffrey Dean and Luiz André Barroso. The tail at scale. Commun. ACM’13
  • Sean Hull. 20 obstacles to scalability. Commun. ACM’13
  • Peter Bailis and Kyle Kingsbury. The network is reliable: An informal survey of real-world communications failures. ACM Queue’14
  • Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. Highly available transactions: Virtues and limitations. PVLDB’14
  • Divy Agrawal, Amr El Abbadi, and Kenneth Salem. A taxonomy of partitioned replicated cloud-based database systems. IEEE Data Eng. Bull.’15
  • Jose M. Faleiro and Daniel J. Abadi. FIT: A distributed database performance tradeoff. IEEE Data Eng. Bull.’15
  • Rachid Guerraoui, Matej Pavlovic, Dragos-Adrian Seredinschi: Trade-offs in Replicated Systems. IEEE Data Eng. Bull.'16
  • M. K. Aguilera, D. B. Terry. The Many Faces of Consistency. IEEE Data Eng. Bull. 39(1): 3-13 (2016)
  • Faisal Nawab, Divyakant Agrawal, and Amr El Abbadi. The challenges of global-scale data management. SIGMOD’16
  • Andrew Pavlo and Matthew Aslett. What’s really new with NewSQL? SIGMOD Rec’16
  • H. Berenson, P. A. Bernstein, J. Gray, J. Melton, E. J. O'Neil, P. E. O'Neil. A Critique of ANSI SQL Isolation Levels. SIGMOD'95
  • Jim Gray. The transaction concept: Virtues and limitations. VLDB’81
  • Per-Åke Larson, Spyros Blanas, Cristian Diaconu, Craig Freedman, Jignesh M. Patel, and Mike Zwilling. High-performance concurrency control mechanisms for main-memory databases. PVLDB’11
  • David B. Lomet, Alan Fekete, Rui Wang, and Peter Ward. Multi-version concurrency via timestamp range conflict management. ICDE’12
  • Stephen Tu, Wenting Zheng, Eddie Kohler, Barbara Liskov, and Samuel Madden. Speedy transactions in multicore in-memory databases. SOSP ’13
  • Mohammad Sadoghi, Kenneth A. Ross, Mustafa Canim, and Bishwaranjan Bhattacharjee. Making updates disk-I/O friendly using SSDs. PVLDB’13 [Paper]
  • Hyuck Han, Seongjae Park, Hyungsoo Jung, Alan Fekete, Uwe Röhm, and Heon Y. Yeom. Scalable serializable snapshot isolation for multicore systems. ICDE’14
  • Mohammad Sadoghi, Mustafa Canim, Bishwaranjan Bhattacharjee, Fabian Nagel, and Kenneth A. Ross. Reducing database locking contention through multi-version concurrency. PVLDB’14 [Paper]
  • Jose Faleiro, Alexander Thomson, and Daniel J Abadi. Lazy evaluation of transactions in database systems. SIGMOD’14
  • Xiangyao Yu, George Bezerra, Andrew Pavlo, Srinivas Devadas, Michael Stonebraker: Staring into the Abyss: An Evaluation of Concurrency Control with One Thousand Cores. PVLDB'14
  • Jose M. Faleiro and Daniel J. Abadi. Rethinking serializable multiversion concurrency control. Proc. VLDB’15
  • Thomas Neumann, Tobias Mühlbauer, and Alfons Kemper. Fast serializable multi-version concurrency control for main-memory database systems. SIGMOD’15
  • John Meehan, Nesime Tatbul, Stan Zdonik, Cansu Aslantas, Ugur Çetintemel, Jiang Du, Tim Kraska, Samuel Madden, David Maier, Andrew Pavlo, Michael Stonebraker, Kristin Tufte, Hao Wang: S-Store: Streaming Meets Transaction Processing. PVLDB'15
  • H. Kimura. FOEDUS: OLTP engine for a thousand cores and NVRAM. SIGMOD'15.
  • Yuan Yuan, Kaibo Wang, Rubao Lee, Xiaoning Ding, Jing Xing, Spyros Blanas, Xiaodong Zhang: BCC: Reducing False Aborts in Optimistic Concurrency Control with Low Cost for In-Memory Databases. PVLDB'16
  • C. Yao, D. Agrawal, G. Chen, Q. Lin, B. C. Ooi, W. F. Wong, and M. Zhang. Exploiting single-threaded model in multi-core in-memory systems. TKDE’16
  • Xiangyao Yu, Andrew Pavlo, Daniel Sanchez, and Srinivas Devadas. TicToc: Time traveling optimistic concurrency control. SIGMOD’16
  • Kangnyeon Kim, Tianzheng Wang, Ryan Johnson, and Ippokratis Pandis. ERMIA: fast memory-optimized database system for heterogeneous workloads. SIGMOD’16
  • Kun Ren, Jose M. Faleiro, and Daniel J. Abadi. Design principles for scaling multi-core OLTP under high contention. SIGMOD’16
  • Tianzheng Wang, Hideaki Kimura: Mostly-Optimistic Concurrency Control for Highly Contended Dynamic Workloads on a Thousand Cores. PVLDB'17
  • T. Qadah, M. Sadoghi. QueCC: A Queue-oriented, Control-free Concurrency Architecture. Middleware 2018. [Paper]
  • Leslie Lamport. Time, clocks, and the ordering of events in a distributed system. Commun. ACM’78
  • A. Chan and R. Gray. Implementing distributed read-only transactions. TSE’85
  • C. Mohan, Bruce G. Lindsay, Ron Obermarck: Transaction Management in the R* Distributed Database Management System. TODS'86
  • Jim Gray and Leslie Lamport. Consensus on transaction commit. TODS’06
  • Philip A. Bernstein, Alan Fekete, Hongfei Guo, Raghu Ramakrishnan, and Pradeep Tamma. Relaxed-currency serializability for middle-tier caching and replication. SIGMOD’06
  • Robert Kallman, Hideaki Kimura, Jonathan Natkins, Andrew Pavlo, Alex Rasin, Stanley B. Zdonik, Evan P. C. Jones, Samuel Madden, Michael Stonebraker, Yang Zhang, John Hugg, and Daniel J. Abadi. H-store: a high-performance, distributed main memory transaction processing system. PVLDB’08
  • Evan P.C. Jones, Daniel J. Abadi, and Samuel Madden. Low overhead concurrency control for partitioned main memory databases. SIGMOD’10
  • Sudipto Das, Divyakant Agrawal, and Amr El Abbadi. G-Store: a scalable data store for transactional multi key access in the cloud. SoCC’10
  • Carlo Curino, Evan Jones, Yang Zhang, and Sam Madden. Schism: a workload-driven approach to database replication and partitioning. VLDB’10
  • Philip A. Bernstein, Colin W. Reid, Ming Wu, and Xinhao Yuan. Optimistic concurrency control by melding trees. PVLDB’11
  • Jason Baker, Chris Bond, James C. Corbett, J. J. Furman, Andrey Khorlin, James Larson, Jean-Michel Leon, Yawei Li, Alexander Lloyd, and Vadim Yushprakh. Megastore: Providing scalable, highly available storage for interactive services. CIDR 2011
  • Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, and Daniel J. Abadi. Calvin: fast distributed transactions for partitioned database systems. SIGMOD’12
  • Sudipto Das, Divyakant Agrawal, and Amr El Abbadi. ElasTraS: An elastic, scalable, and self-managing transactional database for the cloud. TODS’13
  • Justin J. Levandoski, David B. Lomet, Mohamed F. Mokbel, and Kevin Zhao. Deuteronomy: Transaction support for cloud data. In CIDR 2011
  • James A. Cowling, Barbara Liskov: Granola: Low-Overhead Distributed Transaction Coordination. USENIX ATC'12
  • Peter Alvaro, Neil Conway, Joseph M. Hellerstein, and David Maier. Blazes: Coordination analysis for distributed programs. ICDE’14
  • Hatem A. Mahmoud, Vaibhav Arora, Faisal Nawab, Divyakant Agrawal, and Amr El Abbadi. MaaT: Effective and scalable coordination of distributed transactions in the cloud. PVLDB’14
  • Sudip Roy, Lucja Kot, Gabriel Bender, Bailu Ding, Hossein Hojjat, Christoph Koch, Nate Foster, and Johannes Gehrke. The homeostasis protocol: Avoiding transaction coordination through program analysis. SIGMOD’15
  • Akon Dey, Alan Fekete, and Uwe Röhm. Scalable distributed transactions across heterogeneous stores. ICDE’15
  • Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. Feral concurrency control: An empirical investigation of modern application integrity. SIGMOD’15
  • Simon Loesing, Markus Pilman, Thomas Etter, Donald Kossmann: On the Design and Scalability of Distributed Shared-Data Databases. SIGMOD'15
  • Aleksandar Dragojevic, Dushyanth Narayanan, Edmund B. Nightingale, Matthew Renzelmann, Alex Shamis, Anirudh Badam, Miguel Castro: No compromises: distributed transactions with consistency, availability, and performance. SOSP'15
  • Philip A. Bernstein and Sudipto Das. Scaling optimistic concurrency control by approximately partitioning the certifier and log. IEEE Data Eng. Bull.’15
  • Mohammad Sadoghi, Martin Jergler, Hans-Arno Jacobsen, Richard Hull, and Roman Vaculín. Safe distribution and parallel execution of data-centric workflows over the publish/subscribe abstraction. TKDE’15 [Paper]
  • Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. Scalable atomic visibility with RAMP transactions. TODS’16
  • Chang Yao, Divyakant Agrawal, Gang Chen, Beng Chin Ooi, and Sai Wu. Adaptive logging: Optimizing logging and recovery costs in distributed in-memory databases. SIGMOD’16
  • Tamer Eldeeb, Phil Bernstein. Transactions for Distributed Actors in the Cloud. MSR TechReport'16
  • Rachael Harding, Dana Van Aken, Andrew Pavlo, Michael Stonebraker. An Evaluation of Distributed Concurrency Control. VLDB'17
  • S. Gupta, M. Sadoghi. EasyCommit: A Non-blocking Two-phase Commit Protocol. EDBT'18 [Paper]
  • S.B. Davidson, H. Garcia-Molina, and D. Skeen. Consistency in partitioned networks. ACM CSUR’85
  • Maurice Herlihy, Jeannette M. Wing: Linearizability: A Correctness Condition for Concurrent Objects. ACM Trans. Program. Lang. Syst.'90
  • Divyakant Agrawal et al. Consistency and orderability: semantics-based correctness criteria for databases. TODS’93
  • Rajeev Rastogi, Sharad Mehrotra, Yuri Breitbart, Henry F. Korth, and Avi Silberschatz. On correctness of non-serializable executions. PODS’93
  • Atul Adya. Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions. PhD Thesis. MIT'99
  • Werner Vogels. Eventually consistent. Queue’08
  • Alan David Fekete and Krithi Ramamritham. Consistency models for replicated data. In Replication: Theory and Practice’10
  • Wyatt Lloyd, Michael J. Freedman, Michael Kaminsky, and David G. Andersen. Don’t settle for eventual: Scalable causal consistency for wide-area storage with COPS. SOSP ’11
  • Peter Alvaro, Neil Conway, Joseph M. Hellerstein, and William R. Marczak. Consistency analysis in Bloom: a CALM and collected approach. CIDR’11
  • David Bermbach and Stefan Tai. Eventual consistency: How soon is eventual? An evaluation of Amazon S3’s consistency behavior. MW4SOC’11
  • Lisa Glendenning, Ivan Beschastnikh, Arvind Krishnamurthy, Thomas E. Anderson: Scalable consistency in Scatter. SOSP'11
  • Sebastian Burckhardt, Daan Leijen, Manuel Fähndrich, and Mooly Sagiv. Eventually consistent transactions. ESOP’12
  • Neil Conway, William R Marczak, Peter Alvaro, Joseph M Hellerstein, and David Maier. Logic and lattices for distributed programming. SoCC’12
  • Kamal Zellag and Bettina Kemme. How consistent is your cloud application? SoCC’12
  • Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. The potential dangers of causal consistency and an explicit solution. SoCC ’12
  • Peter Alvaro, Peter Bailis, Neil Conway, and Joseph M. Hellerstein. Consistency without borders. SoCC’13
  • Peter Bailis, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. Bolt-on causal consistency. SIGMOD’13
  • Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. Coordination avoidance in database systems. PVLDB’14
  • Valter Balegas, Sérgio Duarte, Carla Ferreira, Rodrigo Rodrigues, Nuno Preguiça, Mahsa Najafzadeh, and Marc Shapiro. Putting consistency back into eventual consistency. EuroSys’15
  • Marek Zawirski, Nuno Preguiça, Sérgio Duarte, Annette Bieniusa, Valter Balegas, and Marc Shapiro. Write fast, read in the past: Causal consistency for client-side applications. Middleware’15
  • Paolo Viotti, Marko Vukolic: Consistency in Non-Transactional Distributed Storage Systems. ACM Comput. Surv.'16
  • Natacha Crooks, Youer Pu, Nancy Estrada, Trinabh Gupta, Lorenzo Alvisi, Allen Clement: TARDiS: A Branch-and-Merge Approach To Weak Consistency. SIGMOD'16
  • Yair Sovran, Russell Power, Marcos K. Aguilera, and Jinyang Li. Transactional storage for geo-replicated systems. SOSP ’11
  • Stacy Patterson, Aaron J. Elmore, Faisal Nawab, Divyakant Agrawal, and Amr El Abbadi. Serializability, not serial: Concurrency control and availability in multi-datacenter datastores. PVLDB’12
  • C. Li, D. Porto, A. Clement, J. Gehrke, N. Preguiça, and R. Rodrigues. Making geo-replicated systems fast as possible, consistent when necessary. OSDI’12
  • Tim Kraska, Gene Pang, Michael J. Franklin, Samuel Madden, and Alan Fekete. MDCC: multi-data center consistency. EuroSys ’13
  • Faisal Nawab, Divyakant Agrawal, and Amr El Abbadi. Message futures: Fast commitment of transactions in multi-datacenter environments. CIDR’13
  • Zhe Wu, Michael Butkiewicz, Dorian Perkins, Ethan Katz-Bassett, and Harsha V. Madhyastha. Spanstore: Cost-effective geo-replicated storage spanning multiple cloud services. SOSP ’13
  • Yang Zhang, Russell Power, Siyuan Zhou, Yair Sovran, Marcos K. Aguilera, and Jinyang Li. Transaction chains: achieving serializability with low latency in geo-distributed storage systems. SOSP’13
  • Wyatt Lloyd, Michael J. Freedman, Michael Kaminsky, David G. Andersen. Stronger semantics for low-latency geo-replicated storage. NSDI’13
  • Hatem A. Mahmoud, Faisal Nawab, Alexander Pucher, Divyakant Agrawal, Amr El Abbadi: Low-Latency Multi-Datacenter Databases using Replicated Commit. PVLDB'13
  • Gene Pang, Tim Kraska, Michael J. Franklin, and Alan Fekete. PLANET: making progress with commit processing in unpredictable environments. SIGMOD’14
  • Dan Dobre, Paolo Viotti, Marko Vukolic: Hybris: Robust Hybrid Cloud Storage. SoCC'14
  • Faisal Nawab, Vaibhav Arora, Divyakant Agrawal, and Amr El Abbadi. Chariots: A scalable shared log for data management in multi-datacenter cloud environments. EDBT’15
  • Faisal Nawab, Vaibhav Arora, Divyakant Agrawal, and Amr El Abbadi. Minimizing commit latency of transactions in geo-replicated data stores. SIGMOD’15
  • Alexander Thomson and Daniel J. Abadi. CalvinFS: Consistent WAN replication and scalable metadata management for distributed file systems. FAST’15
  • Ashish Gupta, Fan Yang, Jason Govig, Adam Kirsch, Kelvin Chan, Kevin Lai, Shuo Wu, Sandeep Govind Dhoot, Abhilash Rajesh Kumar, Ankur Agiwal, Sanjay Bhansali, Mingsheng Hong, Jamie Cameron, Masood Siddiqi, David Jones, Jeff Shute, Andrey Gubarev, Shivakumar Venkataraman, and Divyakant Agrawal. Mesa: a geo-replicated online data warehouse for Google’s advertising system. Commun. ACM’16
  • Bettina Kemme and Gustavo Alonso. Don’t be lazy, be consistent: Postgres-R, A new way to implement database replication. In VLDB 2000
  • Bettina Kemme and Gustavo Alonso. A new approach to developing and implementing eager database replication protocols. TODS’00
  • Bettina Kemme, Fernando Pedone, Gustavo Alonso, André Schiper, and Matthias Wiesmann. Using optimistic atomic broadcast in transaction processing systems. IEEE Trans. Knowl. Data Eng.’03
  • K. Daudjee and K. Salem. Lazy database replication with ordering guarantees. In ICDE’04
  • Robbert van Renesse and Fred B. Schneider. Chain Replication for Supporting High Throughput and Availability. OSDI’04.
  • Yasushi Saito and Marc Shapiro. Optimistic replication. ACM Comput. Surv.’05
  • Yi Lin, Bettina Kemme, Ricardo Jiménez-Peris, et al. Snapshot isolation and integrity constraints in replicated databases. TODS’09
  • Flavio Paiva Junqueira, Benjamin C. Reed, Marco Serafini: Zab: High-performance broadcast for primary-backup systems. DSN'11
  • Marc Shapiro, Nuno M. Preguiça, Carlos Baquero, and Marek Zawirski. Conflict-free replicated data types. SSS’11
  • Mihaela A. Bornea, Orion Hodson, Sameh Elnikety, and Alan Fekete. One-copy serializability with snapshot isolation under the hood. ICDE’11
  • Hyungsoo Jung, Hyuck Han, Alan Fekete, and Uwe Röhm. Serializable snapshot isolation for replicated databases in high-update scenarios. PVLDB’11
  • Peter Bailis, Shivaram Venkataraman, Michael J. Franklin, Joseph M. Hellerstein, and Ion Stoica. Probabilistically bounded staleness for practical partial quorums. PVLDB’12
  • Diego Ongaro and John Ousterhout. In search of an understandable consensus algorithm. USENIX ATC’14
  • Peter Bailis, Shivaram Venkataraman, Michael J. Franklin, Joseph M. Hellerstein, and Ion Stoica. Quantifying eventual consistency with PBS. Commun. ACM’14
  • Carlos Eduardo Benevides Bezerra, Fernando Pedone, Robbert van Renesse: Scalable State-Machine Replication. DSN'14
  • Irene Zhang, Naveen Kr. Sharma, Adriana Szekeres, Arvind Krishnamurthy, Dan R. K. Ports: Building consistent transactions with inconsistent replication. SOSP'15
  • Valter Balegas, Cheng Li, Mahsa Najafzadeh, Daniel Porto, Allen Clement, Sérgio Duarte, Carla Ferreira, Johannes Gehrke, João Leitão, Nuno M. Preguiça, Rodrigo Rodrigues, Marc Shapiro, Viktor Vafeiadis: Geo-Replication: Fast If Possible, Consistent If Necessary. IEEE Data Eng. Bull.’16
  • Leslie Lamport. The Part-time Parliament. ACM Trans. Comput. Syst.'98.
  • M. Castro, B. Liskov: Practical Byzantine Fault Tolerance. OSDI'99
  • Leslie Lamport, Paxos Made Simple, Microsoft Tech Report'01
  • Klaus Kursawe. Optimistic Byzantine Agreement. SRDS’02.
  • Miguel Correia, Nuno Ferreira Neves, and Paulo Veríssimo. How to Tolerate Half Less One Byzantine Nodes in Practical Distributed Systems. SRDS’02.
  • Jian Yin, Jean-Philippe Martin, Arun Venkataramani, Lorenzo Alvisi, and Mike Dahlin. Separating Agreement from Execution for Byzantine Fault Tolerant Services. SOSP ’03.
  • Miguel Castro, Rodrigo Rodrigues, and Barbara Liskov. BASE: Using Abstraction to Improve Fault Tolerance. ACM Trans. Comput. Syst.'03.
  • Ramakrishna Kotla and Mike Dahlin. High Throughput Byzantine Fault Tolerance. DSN’04.
  • Barbara Liskov and Rodrigo Rodrigues. Byzantine Clients Rendered Harmless. DISC’05.
  • Michael Abd-El-Malek, Gregory R. Ganger, Garth R. Goodson, Michael K. Reiter, and Jay J. Wylie. Fault-scalable Byzantine Fault-tolerant Services. SOSP’05
  • James Cowling, Daniel Myers, Barbara Liskov, Rodrigo Rodrigues, and Liuba Shrira. HQ Replication: A Hybrid Quorum Protocol for Byzantine Fault Tolerance. OSDI’06.
  • Jean-Philippe Martin and Lorenzo Alvisi. Fast Byzantine Consensus. IEEE Trans. Dependable Secur. Comput'06.
  • Leslie Lamport. Fast Paxos. Distributed Computing’06
  • Ramakrishna Kotla, Lorenzo Alvisi, Michael Dahlin, Allen Clement, Edmund L. Wong: Zyzzyva: speculative byzantine fault tolerance. SOSP'07
  • A. Singh, P. Maniatis, P. Druschel, and T. Roscoe. Conflict-free Quorum-based BFT Protocols. Max Planck Technical Report'07.
  • Byung-Gon Chun, Petros Maniatis, Scott Shenker, and John Kubiatowicz. Attested Append-only Memory: Making Adversaries Stick to Their Word. SOSP’07.
  • Allen Clement, Edmund Wong, Lorenzo Alvisi, Mike Dahlin, and Mirco Marchetti. Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults. NSDI’09.
  • Giuliana Santos Veronese, Miguel Correia, Alysson Neves Bessani, and Lau Cheuk Lung. Spin One’s Wheels? Byzantine Fault Tolerance with a Spinning Primary. SRDS’09.
  • Marco Serafini, Péter Bokor, Dan Dobre, Matthias Majuntke, and Neeraj Suri. Scrooge: Reducing the costs of fast Byzantine replication in presence of unresponsive replicas. DSN'10.
  • J. Kirsch, B. Coan, Y. Amir, and J. Lane. Prime: Byzantine Replication under Attack. TDSC'10.
  • Timothy Wood, Rahul Singh, Arun Venkataramani, Prashant Shenoy, and Emmanuel Cecchet. ZZ and the Art of Practical BFT Execution. EuroSys’11.
  • Rüdiger Kapitza, Johannes Behl, Christian Cachin, Tobias Distler, Simon Kuhnle, Seyed Vahid Mohammadi, Wolfgang Schröder-Preikschat, and Klaus Stengel. CheapBFT: Resource-efficient Byzantine Fault Tolerance. EuroSys’12.
  • Manos Kapritsos, Yang Wang, Vivien Quéma, Allen Clement, Lorenzo Alvisi, Mike Dahlin: All about Eve: Execute-Verify Replication for Multi-Core Servers. OSDI'12
  • Pierre-Louis Aublin, Sonia Ben Mokhtar, and Vivien Quéma. RBFT: Redundant Byzantine Fault Tolerance. ICDCS'13.
  • Giuliana Santos Veronese, Miguel Correia, Alysson Neves Bessani, Lau Cheuk Lung, and Paulo Verissimo. Efficient Byzantine Fault-Tolerance. IEEE Trans. Comput.'13.
  • Sisi Duan, Hein Meling, Sean Peisert, Haibin Zhang. BChain: Byzantine Replication with High Throughput and Embedded Reconfiguration. OPODIS'14.
  • Diego Ongaro and John Ousterhout. In Search of an Understandable Consensus Algorithm. USENIX ATC’14.
  • Alysson Bessani, João Sousa, and Eduardo E. P. Alchieri. State Machine Replication for the Masses with BFT-SMaRt. DSN'14.
  • Sisi Duan, Sean Peisert, and Karl N. Levitt. hBFT: Speculative Byzantine Fault Tolerance with Minimum Cost. TDSC'15.
  • Johannes Behl, Tobias Distler, and Rüdiger Kapitza. Hybrids on Steroids: SGX-Based High Performance BFT. EuroSys’17.
  • Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, and Ittai Abraham. HotStuff: BFT Consensus in the Lens of Blockchain. PODC'19.
  • Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. Impossibility of Distributed Consensus with One Faulty Process. Journal of the ACM, 32(2), 1985.
  • Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems, 4(3), 1982.
  • Suyash Gupta, Jelle Hellings, and Mohammad Sadoghi. Brief Announcement: Revisiting Consensus Protocols through Wait-Free Parallelization. DISC'19.
  • Jelle Hellings and Mohammad Sadoghi. Brief Announcement: The Fault-Tolerant Cluster-Sending Problem. DISC'19.
  • Faisal Nawab and Mohammad Sadoghi. Blockplane: A Global-Scale Byzantizing Middleware. ICDE'19.
  • Markus Jakobsson and Ari Juels. Proofs of Work and Bread Pudding Protocols. CMS'99.
  • S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008
  • Sunny King and Scott Nadal. PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake. 2012.
  • David Schwartz, Noah Youngs, and Arthur Britto. The Ripple Protocol Consensus Algorithm. 2014.
  • Giuseppe Ateniese, Ilario Bonacina, Antonio Faonio, Nicola Galesi. Proofs of Space: When Space Is of the Essence. SCN'14.
  • J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, E. W. Felten. SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies. IEEE SP'15
  • M. Vukolic. The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication. iNetSeC'15
  • Sunoo Park, Krzysztof Pietrzak, Albert Kwon, Joël Alwen, Georg Fuchsbauer, Peter Gazi. Spacemint: A Cryptocurrency Based on Proofs of Space. IACR Cryptology ePrint Archive'15.
  • Gavin Wood. Ethereum: A secure decentralised generalised transaction ledger. 2015.
  • M. Pilkington. Blockchain Technology: Principles and Applications. 2016
  • C. Decker, J. Seidel, R. Wattenhofer. Bitcoin meets strong consistency. ICDCN'16
  • I. Eyal, A. Efe Gencer, E. Gün Sirer, R. v. Renesse: Bitcoin-NG: A Scalable Blockchain Protocol. NSDI'16
  • A. Hari, T. V. Lakshman. The Internet Blockchain: A Distributed, Tamper-Resistant Transaction Framework for the Internet. HotNets'16
  • T. McConaghy, R. Marques, A. Muller, D. De Jonghe, Troy McConaghy, G. McMullen, R. Henderson, S. Bellemare, and A. Granzotto. BigchainDB: A Scalable Blockchain Database. White Paper 2016.
  • Andrew Miller, Yu Xia, Kyle Croman, Elaine Shi, and Dawn Song. The Honey Badger of BFT Protocols. CCS'16.
  • Eleftherios Kokoris Kogias, Philipp Jovanovic, Nicolas Gailly, Ismail Khoffi, Linus Gasser, and Bryan Ford. Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing. USENIX Security'16.
  • T. Dinh, J. Wang, G. Chen, R. Liu, B. Chin Ooi, K.-L. Tan: BlockBench: A Framework for Analyzing Private Blockchains. SIGMOD'17
  • E. Cecchetti, F. Zhang, Y. Ji, A. E. Kosba, A. Juels, E. Shi: Solidus: Confidential Distributed Ledger Transactions via PVORM. CCS'17
  • C. Cachin, M. Vukolic. Blockchain Consensus Protocols in the Wild (Keynote Talk). DISC'17
  • Tien Tuan Anh Dinh, et al. Untangling Blockchain: A Data Processing View of Blockchain Systems. TKDE'17.
  • Jiaping Wang and Hao Wang. Monoxide: Scale out Blockchains with Asynchronous Consensus Zones. NSDI'19.
  • Sisi Duan, Michael K. Reiter, and Haibin Zhang. BEAT: Asynchronous BFT Made Practical. CCS'18.
  • Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, Nickolai Zeldovich. Algorand: Scaling Byzantine Agreements for Cryptocurrencies. SOSP'17
  • Mahdi Zamani, Mahnush Movahedi, and Mariana Raykova. RapidChain: Scaling Blockchain via Full Sharding. CCS'18.
  • Diksha Gupta, Jared Saia, and Maxwell Young. Proof of Work Without All the Work. ICDCN’18.
  • Parity Technologies. Parity Ethereum Blockchain. 2018.
  • Maurice Herlihy. Blockchains from a Distributed Computing Perspective. Communications of the ACM, 62(2), 2019.
  • Arvind Narayanan and Jeremy Clark. Bitcoin's Academic Pedigree. Communications of the ACM, 60(12), 2017.
  • S. Gupta, M. Sadoghi. Blockchain Transaction Processing. Encyclopedia of Big Data Technologies. Springer'18 [Paper]
  • Elli Androulaki et al. Hyperledger fabric: a distributed operating system for permissioned blockchains. EuroSys'18
  • Yu Cao, Chun Chen, Fei Guo, Dawei Jiang, Yuting Lin, Beng Chin Ooi, Hoang Tam Vo, Sai Wu, and Quanqing Xu. ES2: A cloud data storage system for supporting both OLTP and OLAP. ICDE’11
  • A. Kemper and T. Neumann. HyPer: A hybrid OLTP and OLAP main memory database system based on virtual memory snapshots. ICDE’11
  • Joy Arulraj, Andrew Pavlo, and Prashanth Menon. Bridging the archipelago between row-stores and column-stores for hybrid workloads. SIGMOD’16
  • Mohammad Sadoghi, Souvik Bhattacherjee, Bishwaranjan Bhattacharjee, and Mustafa Canim. L-Store: A real-time OLTP and OLAP system. EDBT'18 [Paper]
  • J. Giceva, M. Sadoghi. Hybrid OLTP and OLAP. Encyclopedia of Big Data Technologies. Springer'18 [Paper]
  • Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Michael Burrows, Tushar Chandra, Andrew Fikes, and Robert Gruber. Bigtable: A distributed storage system for structured data. OSDI’06
  • Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. Dynamo: amazon’s highly available key-value store. SOSP’07
  • Brian F. Cooper, Raghu Ramakrishnan, Utkarsh Srivastava, Adam Silberstein, Philip Bohannon, Hans-Arno Jacobsen, Nick Puz, Daniel Weaver, and Ramana Yerneni. PNUTS: Yahoo!'s Hosted Data Serving Platform. PVLDB'08.
  • Hoang Tam Vo, Sheng Wang, Divyakant Agrawal, Gang Chen, and Beng Chin Ooi. LogBase: A scalable log-structured database system in the cloud. PVLDB’12
  • James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Yasushi Saito, Michal Szymaniak, Christopher Taylor, Ruth Wang, and Dale Woodford. Spanner: Google’s Globally-Distributed Database. OSDI’12
  • Jeff Shute, Radek Vingralek, Bart Samwel, Ben Handy, Chad Whipkey, Eric Rollins, Mircea Oancea, Kyle Littlefield, David Menestrina, Stephan Ellner, John Cieslewicz, Ian Rae, Traian Stancescu, and Himani Apte. F1: A distributed SQL database that scales. PVLDB’13
  • Lin Qiao et al. On brewing fresh Espresso: LinkedIn’s distributed data serving platform. SIGMOD’13
  • John K. Ousterhout, Parag Agrawal, David Erickson, Christos Kozyrakis, Jacob Leverich, David Mazières, Subhasish Mitra, Aravind Narayanan, Diego Ongaro, Guru M. Parulkar, Mendel Rosenblum, Stephen M. Rumble, Eric Stratmann, Ryan Stutsman: The case for RAMCloud. Commun. ACM'11
  • Diego Ongaro, Stephen M. Rumble, Ryan Stutsman, John K. Ousterhout, Mendel Rosenblum: Fast crash recovery in RAMCloud. SOSP'11
  • Viktor Leis, Alfons Kemper, and Thomas Neumann. Exploiting hardware transactional memory in main-memory databases. ICDE’14
  • Aleksandar Dragojevic, Dushyanth Narayanan, Miguel Castro, Orion Hodson: FaRM: Fast Remote Memory. NSDI'14
  • Hyeontaek Lim, Dongsu Han, David G. Andersen, Michael Kaminsky. MICA: A Holistic Approach to Fast In-Memory Key-Value Storage. NSDI'14
  • Stephen M. Rumble, Ankita Kejriwal, John K. Ousterhout: Log-structured memory for DRAM-based storage. FAST'14
  • Andrew Pavlo. Emerging hardware trends in large-scale transaction processing. IEEE Internet Computing’15
  • Marius Poke, Torsten Hoefler: DARE: High-Performance State Machine Replication on RDMA Networks. HPDC'15
  • Mohammadreza Najafi, Mohammad Sadoghi, Hans-Arno Jacobsen: The FQP Vision: Flexible Query Processing on a Reconfigurable Computing Fabric. SIGMOD Record'15 [Paper]
  • Kai Zhang, Kaibo Wang, Yuan Yuan, Lei Guo, Rubao Lee, Xiaodong Zhang: Mega-KV: A Case for GPUs to Maximize the Throughput of In-Memory Key-Value Stores. PVLDB'15
  • Yandong Wang, Li Zhang, Jian Tan, Min Li, Yuqing Gao, Xavier Guerin, Xiaoqiao Meng, and Shicong Meng. HydraDB: A resilient RDMA-driven key-value middleware for in-memory cluster computing. SC’15
  • Alexander Matveev, Nir Shavit, Pascal Felber, Patrick Marlier. Read-log-update: a lightweight synchronization mechanism for concurrent programming. SOSP'15
  • Xingda Wei, Jiaxin Shi, Yanzhe Chen, Rong Chen, Haibo Chen. Fast in-memory transaction processing using RDMA and HTM. SOSP'15
  • Yanzhe Chen, Xingda Wei, Jiaxin Shi, Rong Chen, Haibo Chen. Fast and general distributed transactions using RDMA and HTM. EuroSys'16
  • Sheng Li, Hyeontaek Lim, Victor W. Lee, Jung Ho Ahn, Anuj Kalia, Michael Kaminsky, David G. Andersen, Seongil O, Sukhan Lee, Pradeep Dubey. Achieving One Billion Key-Value Requests per Second on a Single Server. IEEE Micro'16
  • Mohammad Sadoghi, Kenneth A. Ross, Mustafa Canim, Bishwaranjan Bhattacharjee: Exploiting SSDs in operational multiversion databases. VLDB J.'16 [Paper]
  • Peter Bailis, Joy Arulraj, and Andrew Pavlo. Research for practice: distributed consensus and implications of NVM on database management systems. Commun. ACM’16
  • Zsolt István, David Sidler, and Gustavo Alonso, Marko Vukolić. Consensus in a Box: Inexpensive Coordination in Hardware. NSDI'16
  • Mohammadreza Najafi, Mohammad Sadoghi, and Hans-Arno Jacobsen. Hardware Acceleration Landscape for Distributed Real-Time Analytics: Virtues and Limitations. ICDCS'17 [Paper]


Top Projects


Grades will be made available on your Canvas account.