SIGTBD 2020

April 2, 2020 · Cambridge, MA
Sam DeLaughter
Quan Nguyen
Nancy Ouyang
You?

SIGTBD will be taking place in the gorgeous Hewlett Room (32-G882), a sprawling conference room in the heart of Cambridgeport that can accommodate a bunch of people.
Navigate to the building located at ECEF coordinates (1529595, -4465281, 4275399); a sketch for converting these to latitude and longitude follows the directions below. Then, starting from the ground floor:
take the #16 elevator down to the subsubbasement
(exit out the back)
walk through the stalagmites until you see a blue one
turn right, then immediately turn to face the West (Adam West, not Wild Wild West)
you should see a short flight of stairs over his left shoulder
-DO NOT GO UP THESE-
Batman will take you to the rope ladder up to the main entrance.
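
If you left your ECEF-to-geodetic conversion tables at home, here is a minimal sketch using the pyproj library (an assumption on our part; any WGS84 geocentric-to-geodetic converter will do):

    from pyproj import Transformer

    # EPSG:4978 is WGS84 geocentric (ECEF); EPSG:4326 is WGS84 latitude/longitude.
    ecef_to_lla = Transformer.from_crs("EPSG:4978", "EPSG:4326")
    lat, lon, alt = ecef_to_lla.transform(1529595, -4465281, 4275399)
    print(f"{lat:.4f}, {lon:.4f}")  # should land you in Cambridge, MA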

If you have any questions, concerns, comments, or stories to share, you may reach the SIGTBD program committee via email at sigtbd@csail.mit.edu. Please be sure to thank each conference organizer personally, and at length, in all your communications.

A reminder to all authors: SIGTBD employs a double-blind paper review process, and authors are prohibited from de-anonymizing their work, including via communications with the SIGTBD committee. Vague references to your submitted work, such as in the form of poetic allusions or puzzles, are acceptable. Communication by any means with strong anonymity guarantees is also encouraged.

Due to cancellations of events at CSAIL and MIT in response to COVID-19, SIGTBD'20 will not be held this year.

SIGTBD 2020 is the 5th annual conference in TBD and the premier venue for academic and semi-academic work; it simply has the best papers. Many papers! Including yours, if you submit your very finest work.

SIGTBD is the premier forum for all areas of computer science research, including big data, machine learning, theory, programming languages, systems, networking and so on. SIGTBD's emphases include innovative, elegant (to the point of simplicity) and creative approaches for solving problems that are not traditionally served by the academic community. These problem spaces may be obsolete or unrealistic even by academic standards and are often of debatable research taste. View papers from previous years.

Have an idea, but don't want to write a full-length paper or prepare a presentation? Does it fill a page? New in 2020, we will be piloting a poster session to allow the community to showcase the finest work that can fit on a single 8.5" x 11" sheet of paper (A4 also accepted). Poster presenters will have the opportunity to present their work in a lightning poster session (about 1 minute per poster).

The Call For Papers (CFP) for SIGTBD can be found here, with more general info here. The submission site is at HotCRP.

Instructions for Submission to SIGTBD20

This document provides instructions for submitting papers to SIGTBD. In an effort to respect the efforts of reviewers and in the interest of fairness to all prospective authors, we request that all submissions to SIGTBD follow the formatting and submission rules detailed below. Submissions that violate these instructions may not be reviewed, at the discretion of the program chair, in order to maintain a review process that is fair to all potential authors.

The committee will make every effort to judge each submitted paper on its own merits. There will be no target acceptance rate. We expect to accept a wide range of papers with appropriate expectations for evaluation: while papers that build on significant past work with strong evaluations are valuable, papers that open new areas with less rigorous evaluation are equally welcome and especially encouraged. In either case, what is important is that the paper’s claims are no stronger than what is supported by the evaluation. Given the wide range of topics covered by SIGTBD, every effort will be made to find any expert reviewers.

All questions regarding paper formatting and submission should be directed to the program chair.

The paper submission site is HotCRP.

Submission Resources
General Submission Requirements
  • Papers must be submitted in printable PDF format.
  • Text must be in a minimum 10pt font (not 9pt).
  • Submissions are double blind and author names must be removed.
  • On the published copy, authors must be ordered by ascending number of vowels (see the sketch after this list).
  • Papers must be at most 11 pages in US letter format, not including references.
  • Additional appendices (beyond 11 pages) are allowed in the paper submission.
  • There is no page limit for references.
  • Each reference must specify all authors (no et al.).
  • The rules above don't apply to posters.
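
The vowel-ordering rule above is mechanical enough to automate. A minimal sketch (tie-breaking is unspecified in these requirements, so this keeps submission order for ties):

    def order_authors(authors):
        """Order authors by ascending vowel count, per the
        published-copy rule above. Python's stable sort keeps
        the original order when vowel counts tie."""
        def vowel_count(name):
            return sum(ch in "aeiouAEIOU" for ch in name)
        return sorted(authors, key=vowel_count)

    # e.g. order_authors(["Quan Nguyen", "Sam DeLaughter", "Nancy Ouyang"])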
Conference Requirements

Authors of all accepted papers will be required to give a regular conference talk and submit a tweet abstract (no longer than 280 characters), and may optionally submit a Vine video advertising their talk. The published copy of the paper will be included in the SIGTBD20 Proceedings and available on the online SIGTBD digital library (subscription required for access).

Summary of requirements:

  • Required: A 10 minute conference talk
  • Required: A short form abstract (< 280 characters)
  • Optional: A Vine video advertising the talk (Example)

Poster session:

  • Required: A poster no larger than an 8.5" x 11" piece of paper (Example)
  • Required: A 1 minute lightning talk

Conflicts

You should specify only names that represent actual conflicts: individuals who, in the context of this conference, share the same initials as one of the co-authors (a sketch of this check follows below). Individuals declared a conflict will be tagged with a unique identifier to prevent aliasing of entries in the database.
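
The initials rule reads as a mechanical check, so here is a minimal sketch of one reading of it (the function names, the use of UUIDs as the "unique identifier", and the data shapes are illustrative assumptions, not the committee's actual tooling):

    import uuid

    def initials(name):
        return "".join(part[0].upper() for part in name.split())

    def declare_conflicts(candidates, coauthors):
        """Keep only candidates whose initials collide with a co-author's,
        then tag each with a unique identifier to prevent aliasing of
        entries in the database."""
        coauthor_initials = {initials(a) for a in coauthors}
        return {name: str(uuid.uuid4())
                for name in candidates if initials(name) in coauthor_initials}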

Any interpersonal conflicts of interest must be declared in the form of a passive-aggressive email sent to the PC chair. In the event an interpersonal conflict of interest is declared, its validity will be determined by the PC. The best conflict of interest emails will be anonymized and included in the conference proceedings.

Submission Formatting

Papers must be submitted in PDF format and follow the standard two-column ACM proceedings style in a minimum 10-point font, and be at most 11 pages, exclusive of the bibliography. The bibliography is excluded from the page count to encourage good citation practices and discourage illegible bibliographies. Citations can be either in numeric style or author-year style. Numeric citations always stand as a parenthetical note (e.g., “[42]”), while author-year citations may stand either as a noun phrase (e.g., “Church (1935)”), or as a parenthetical note (e.g., “(Church, 1935)”).

If you are using LaTeX to typeset your paper, then we strongly suggest that you use this LaTeX class file (with the ‘sigtbd’ class option) and template. If you are using Microsoft Word, we recommend you give up on following any of the formatting requirements and submit in whatever format you please.

Double Blind

Reviewing will be double blind; therefore, please do not include any author names or links to authors’ websites, etc. on any submitted documents except in the space provided on the online submission form. Please take time to read the SIGTBD FAQ on Double Blind Reviewing, which gives a more comprehensive and authoritative account than this short paragraph. If you are improving upon your prior work, refer to your prior work in the third person and include a full citation for the work in the bibliography. For example, if you happened to be Collins and McCarthy, building on your own prior work, you might say something like: “While prior work [Backus et al. 1960; Collins 1960; McCarthy 1960] did X, Y, and Z, this paper additionally does W, and is therefore much better.” Do NOT omit or anonymize references for blind review.

In general, you should aim to reduce the risk of accidental unblinding. For example, if your paper describes a system with a well-known name or codename, or you use a personally-identifiable naming convention for your work, you must use a different name for your submission (which you may indicate has been changed for the purposes of double-blind reviewing). You should also avoid revealing the institution where the work was performed.

Figures and Tables

Ensure that the figures and tables are legible. Please also ensure that you refer to your figures in the main text. Many reviewers print the papers in gray-scale. Therefore, please use colors for your figures and ensure that the different colors are not highly distinguishable in gray-scale, so that these reviewers can finally be persuaded to stop doing that.
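
To check that your palette complies, compare luma values, since luma is roughly what a gray-scale printer produces. A minimal sketch (the BT.601 weights are standard; the tolerance is our own invention):

    def luma(rgb):
        """ITU-R BT.601 luma of an 8-bit RGB triple."""
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b

    def indistinguishable_in_grayscale(colors, tolerance=10.0):
        """True if every color collapses to nearly the same gray."""
        lumas = [luma(c) for c in colors]
        return max(lumas) - min(lumas) <= tolerance

    # Pure red and a dark green print as almost the same gray:
    print(indistinguishable_in_grayscale([(255, 0, 0), (0, 130, 0)]))  # True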

References

There is no length limit for references. Each reference must explicitly list all authors of the paper. Papers not meeting this requirement will be rejected.
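
A quick pre-submission check for the all-authors rule: a sketch that simply scans your bibliography text for the forbidden abbreviation (the file-name convention in the usage line is an assumption):

    import re, sys

    def et_al_lines(bibliography_text):
        """Return line numbers that say 'et al.' instead of
        listing all authors."""
        return [i + 1 for i, line in enumerate(bibliography_text.splitlines())
                if re.search(r"\bet al\.", line)]

    # usage: python check_refs.py paper.bbl
    if __name__ == "__main__":
        offenders = et_al_lines(open(sys.argv[1]).read())
        if offenders:
            sys.exit(f"'et al.' on line(s) {offenders}: list all authors.")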

Declaring Authors

Enter all authors of the paper into the online paper submission tool upfront. Addition/removal of authors once the paper is accepted will have to be approved by the program chair, since it potentially undermines the goal of eliminating conflicts for reviewer assignment.

Concurrent Submissions and Workshops

By submitting a manuscript to SIGTBD, authors guarantee that they are adhering to the SIGPLAN Republication Policy. Please ensure that you are familiar with it. Violation of any of these conditions will lead to rejection.

As always, if you are in doubt, it is best to contact the program chair.

Finally, we also note that the ACM Plagiarism Policy covers a range of ethical issues concerning the misrepresentation of other works or one’s own work. To encourage proper citation of your work, all submissions must cite themselves in their own references section.

Early Access in the Digital Library

The SIGTBD proceedings may be available on the ACM Digital Library as early as May 30, 2034. Authors must consider any implications of this early disclosure of their work before submitting their papers.

Acknowledgements

This document is based heavily on ones prepared for previous conferences and we thank their program chairs; in particular, Sandhya Dwarkadas (ASPLOS’15), Sarita Adve (ASPLOS’14), Steve Keckler (ISCA’14), Christos Kozyrakis (Micro’13), Margaret Martonosi (ISCA’13), Onur Mutlu (Micro’12), Michael L. Scott (ASPLOS’12), and Steve Blackburn (SIGTBD’15).

References

J. W. Backus, F. L. Bauer, J. Green, C. Katz, J. McCarthy, A. J. Perlis, H. Rutishauser, K. Samelson, B. Vauquois, J. H. Wegstein, A. van Wijngaarden, and M. Woodger. Report on the algorithmic language ALGOL 60. Commun. ACM, 3(5):299–314, May 1960. ISSN 0001-0782. doi: 10.1145/367236.367262.

G. E. Collins. A method for overlapping and erasure of lists. Commun. ACM, 3(12):655–657, December 1960. doi: 10.1145/367487.367501.

L. Lamport. LaTeX: A Document Preparation System. Addison-Wesley, Reading, Massachusetts, 2nd edition, 1994.

J. McCarthy. Recursive functions of symbolic expressions and their computation by machine, part I. Commun. ACM, 3(4):184–195, April 1960. doi: 10.1145/367177.367199.
Past Proceedings

2016

Keynote: Classification Trees for Determining Mood Affiliation of Males with Androgenic Alopecia
Michal Grzadkowski
Optimistic Fault Tolerance
Shoumik Palkar, Akshay Narayan, Paroma Varma
Traditionally, fault tolerance has been considered a pivotal component of system design. However, as modern commodity hardware becomes increasingly reliable, building dedicated fault tolerance systems becomes unnecessary. Therefore, while in this paper we propose nothing novel and make no technical contributions, we nevertheless identify a unique approach to fault tolerance, which we dub optimistic fault tolerance. We implement optimistic fault tolerance (OFT) as a C library that supports seamless interoperability with any large scale system. We provide a survey of real systems that demonstrates the rarity of system failure, and additionally show through benchmarks and simulation that across numerous workloads, our framework provides tolerance in the face of rare arbitrary failures.
Evident Logic
Ben Sherman
Many logical systems justify their consistency via proofs of cut elimination or normalization. However, these metatheoretic proofs inevitably use a transfinite induction which cannot be proved well-founded within the system itself. We present evident logic, a logical system with a normalization proof which does not require any induction that is stronger than what is available within the logical system itself. Via the Curry-Howard correspondence, this system may also be viewed as a programming language for very efficient computations. We provide a typechecker and normalizer within Coq; all programs normalize in time constant with respect to their length. We discuss the potential advantages of working with weak foundations.
Falso: a simple, self-consistent logic
A. Amarilli, T. Bourgeat, B. Sherman
Conventional wisdom states that according to Gödel's incompleteness theorem, any consistent logical system which subsumes Peano arithmetic cannot prove its own consistency. However, Falso, a powerful higher-order logical framework, shows that this is not the case: Falso proves itself consistent, and this consistency proof, of course, tells us that Falso is consistent. Falso is also complete: every proposition can be proved or refuted. We use the Estatis Inc. HyperProver and HyperVerifier, which implements a complete decision procedure for Falso, to resolve several prominent open problems in theoretical computer science.
Motion Planning using Naturally Annoying Grammars (NAG)
Caelan R Garrett, Clement Gehring, Gustavo Goretkin, Zelda Mariet, Zi Wang
Motion planning is hard, but we can do it better.
A Polynomial Time Solution for 2-SAT
Will Leiserson
The question of whether the P and NP complexity classes are equivalent is unresolved. It is conjectured that they are inequivalent because of the vast number of NP-Complete languages (including the ever-popular 3-SAT) for which there is no known polynomial time solution. The author of this paper respectfully disagrees and presents a polynomial time solution to a problem closely related to 3-SAT: 2-SAT.
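For readers keeping score at home, 2-SAT does indeed admit a polynomial-time solution. A minimal sketch of the textbook implication-graph approach (the standard construction, not code from the paper itself):

    def solve_2sat(n, clauses):
        """n variables (1..n); each clause is a pair of nonzero ints,
        negative meaning negated. Returns a satisfying assignment as a
        dict {var: bool}, or None if unsatisfiable."""
        def node(lit):                      # literal -> vertex id
            return 2 * (abs(lit) - 1) + (lit < 0)

        N = 2 * n
        adj = [[] for _ in range(N)]        # implication graph
        radj = [[] for _ in range(N)]       # ...and its reverse
        for a, b in clauses:                # (a or b): ~a -> b, ~b -> a
            for u, v in ((node(-a), node(b)), (node(-b), node(a))):
                adj[u].append(v)
                radj[v].append(u)

        # Kosaraju pass 1: record vertices in order of DFS finish time.
        seen, order = [False] * N, []
        for s in range(N):
            if seen[s]:
                continue
            seen[s] = True
            stack = [(s, iter(adj[s]))]
            while stack:
                u, it = stack[-1]
                for w in it:
                    if not seen[w]:
                        seen[w] = True
                        stack.append((w, iter(adj[w])))
                        break
                else:                       # u has no unvisited neighbors
                    order.append(u)
                    stack.pop()

        # Pass 2: label SCCs on the reverse graph, in reverse finish order.
        comp, c = [-1] * N, 0
        for s in reversed(order):
            if comp[s] != -1:
                continue
            comp[s] = c
            stack = [s]
            while stack:
                u = stack.pop()
                for w in radj[u]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1

        # x is true iff its SCC comes after ~x's in topological order.
        assign = {}
        for v in range(1, n + 1):
            if comp[node(v)] == comp[node(-v)]:
                return None                 # x and ~x equivalent: UNSAT
            assign[v] = comp[node(v)] > comp[node(-v)]
        return assign

    # (x1 or x2) and (not x1 or x2) and (not x2 or x1) -> x1 = x2 = True
    print(solve_2sat(2, [(1, 2), (-1, 2), (-2, 1)]))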
Solving the Dating Problem with the SENPAI Protocol
S. Dukhovni, J. Weisblat, I. Chung
The Dating Problem is an extremely difficult computational problem that has plagued humanity since time immemorial. The problem concerns two agents Alice and Bob (or Alex and Bob, or Alice and Beatrice), each of whom may or may not have a crush on the other. Each is initially unaware of the other's feelings, and if they have a crush, they would like to know whether the other does as well; however, each would like to reveal their crush only if the other shares the interest. This problem is of widespread interest and applicability; for example, a solution to the Dating Problem is a hitherto unremarked-on prerequisite for the Stable Marriage Algorithm.
How [verb, gerund] the [noun] affects the [adjective] [property] of [proper noun] [collection, plural].
M. Coulombe, P. Vasudevan
Word Blanks is a game in which one player has as input a secret paragraph with some words replaced with blanks with part-of-speech tags, such as [plural noun], and the other player has as input the experience of being human for long enough to understand parts of speech. The first player queries the other player with part-of-speech tags and receives words to put in the corresponding blanks. The goal of Word Blanks is to maximize the humor of the resulting paragraph. First, we collected data from a representative population of 11 theoretical computer science grad students as the second player in the game, then chose 5 filled paragraphs and asked 20 computer science grad students which of a pair was funnier given the order they were presented. Second, we took our data and used a logistic regression model to learn which paragraph in any ordered pair is more humorous, and applied the model to the other filled paragraphs. Our results are that humor dominance is a strict partial order, and that the second paragraph presented is usually the more humorous.
Learning Computational Complexity
M. Coulombe, P. Vasudevan
Very.
Cartesian Dual Products
James Koppel
7
Locally Bijective, Non-Noetherian, Ultra-Liouville Random Variables and Parabolic Lie Theory
S. Park, M. Cohen, D. Tsipras, G. Ramnarayan
Let ${\iota^{(\mathfrak{{a}})}} \ge e$. In \cite{cite:0}, the authors address the degeneracy of compactly intrinsic algebras under the additional assumption that $u$ is not greater than $\mathscr{{Q}}$. We show that $\bar{\mathbf{{f}}} = \infty$. Now the groundbreaking work of K. Zhou on anti-composite hulls was a major advance. It is essential to consider that $D'$ may be commutative. We extend this to the most general case. In the process, we discover novel tools for analyzing pseudo-elliptic operators on dense schemes of the $9.52$-dimensional lens space. We also point to some possible connections to deep learning.
PERSHING: An Automatic Place-and-Route Tool for Minecraft Redstone Circuits
Quan Nguyen
In Minecraft, circuits can be assembled from redstone components. We introduce PERSHING, a place and route tool that can transform a gate-level netlist into logic gates built of redstone in Minecraft, without the need for external Minecraft mods. We discuss challenges, such as vertical signal transmission, and the algorithms used to perform placement and routing. We show preliminary results, including completed layouts, note future paths to explore, and conclude.
A Data Driven Analysis Framework and Erotica Writing Assistant
Sara Achour
Since the advent of the internet, there have been increasing numbers of user-curated pornographic fanfiction and erotic narratives. With the rising popularity of erotic fiction on digital book readers and on the silver and movie screens, the corpus of erotica is an increasingly valuable and fascinating dataset. We present (1) xxxwriter, an erotica writing assistant, (2) tagmage, a tag suggestion engine, and (3) triplex, an erotic fiction dataset. We additionally perform a detailed analysis of trends in both the metadata and prose present in the triplex dataset.
50,000,000,000 Instructions Per Second : Design and Implementation of a 256-Core BrainFuck Computer
S. Jun
This paper describes the design and implementation of a machine for servicing a large number of BrainFuck execution instances. The design includes the BrainFuck manycore processor and the software daemon that manages the execution instances on the processor. The BrainFuck manycore processor is a 256-core processor implementing a slightly modified version of BrainFuck as its ISA. Each core is implemented with a two-stage pipeline with a carefully designed ISA encoding operating on disjoint memory spaces. Each core achieves over 0.8 instructions per cycle. We have implemented our design with an x86 host server and a Xilinx Virtex 7 FPGA, and achieved an aggregate performance over 50 billion instructions per second, which is light years ahead of what any single general purpose processor is capable of achieving while executing BrainFuck code. The BrainFuck computer is an attractive solution for servicing high throughput BrainFuck cloud services, both in terms of performance and cost.

2017

OneHunnit: Exactly One Hundred Megabytes of Academic Work [PDF 100M]
Ilia Lebedev
With OneHunnit, this manuscript introduces a novel body of inherently useful scientific work totaling precisely one hundred megabytes (100MB) in size, working under an assumption of an ISO 32000-1 (PDF) manuscript, submitted via an Apache-hosted HotCRP (Hot Crap) content management system. The methodology used to obtain this result generalizes easily, allowing for manuscripts of arbitrary file size down to a minimum of $u_{doc}$, the minimum achievable file size of the desired manuscript, where $u_{doc}$ is small (but highly variable and dependent on the document content). This work includes a thorough evaluation of OneHunnit, and discusses several alternate approaches to the task of increasing manuscript file size.
Stochastic Caching for Read Optimized Eventually Consistent Datastores
Jonathan Behrens
In this work we describe a new cache mechanism to radically improve read performance of an eventually consistent database without sacrificing consistency guarantees and actually resulting in improved availability. Our system scales quadratically in the number of nodes and can support many hundreds of concurrent connections. Early results also suggest that bandwidth utilization is competitive with existing solutions and aligned to the capabilities of existing 2G cellular networks.
Locality Insensitive Hashing: Towards Ignoring the Curse of Dimensionality
D. Blalock, Bruce Wayne
The task of retrieving points from a collection based on a query is foundational in machine learning and data mining. Almost all existing work focuses on retrieving points that are similar to the query under some similarity or distance measure. However, virtually no work has been done on retrieving points that have little or nothing to do with the query.
We describe a search algorithm that returns such points with high probability in time independent of the dataset size. By leveraging a simple randomized algorithm, we are able to return irrelevant results on real and synthetic datasets orders of magnitude faster than existing techniques. Further, our approach is insensitive to the dimensionality of points and demands no preprocessing or space overhead.
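The "simple randomized algorithm" is left to the reader's imagination; one reading that meets all of the stated bounds is plain uniform sampling, sketched below (our guess at the flavor of the algorithm, not the paper's actual construction):

    import random

    def irrelevant_neighbors(dataset, query, k=5):
        """Return k points that, with high probability, have little to do
        with the query: ignore the query and sample uniformly at random.
        Runs in O(k) time, independent of len(dataset), insensitive to
        dimensionality, with no preprocessing or space overhead."""
        return [dataset[random.randrange(len(dataset))] for _ in range(k)]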
BurgerMax Presents: Open Adcess
Brett Boston, Max Dunitz
In this paper we examine an alternate funding model for open access papers. Through modest modifications to the ACM Digital Library and the addition of paper sponsorships we can make the entire corpus of ACM works free to the public. We also peer into a chilling future that only we can prevent. Will we stop the ACM from encroaching on our everyday freedoms? Continue existing and participating in reality to find out!
Fortune Cookies as a Source of Entropy - A Qualitative Analysis
E. Rahn
Entropy, as used in computing, is a source of random information collected from various sources. This randomness is subject to various measurements of quality and is a measurable resource that can be depleted by using it up. Coming up with sources of high quality entropy under non-standard circumstances is a research question of interest within the security community. This paper explores an unconventional source of entropy: the usage of so-called "lucky numbers" on fortune cookies as a source of entropy. Studies are conducted on both the cost of this source compared to more conventional sources as well as the quality of random numbers generated.
A Pizza My Mind
W. Boag, A. Larochelle
Alfredo's Italian Kitchen. Brothers Pizza. Papa Gino's. ABC Pizza. What do these all have in common? They are pizza stores. For whatever reason, the phrase "pizza store" seems to inspire such vitriol among my so-called "friends". "That's not a real phrase," they always snidely remark. Oh yeah? Well I'm the one with a 3.8-billion-word dump of English Wikipedia! I decided to run some experiments to see just how "not a thing" my very normal phrase is. How frequently did a few common pizza phrases occur? My experiments suggest that although synonymous phrases might be modestly more popular, many of the acceptable phrases are all within the same order of magnitude. In addition, a popular internet search engine does an arguably better job with understanding "pizza store" than it does "pizza place". In other words, my friends are jerks.
Optimum Space Complexity Lower Bound Achieved by Quantum Pong Circuit
Michael Coulombe
Pong™ is a classic game that has been recreated numerous times, but since it was discovered 40 years ago, the space complexity of a quantum computer implementation of Pong has remained an open question. Disproving the short-lived conjecture that it requires O(wh) qubits for a w x h playing field, QONG is a quantum circuit that implements Pong using O(log w h^3) qubits to store the game-state and zero scratch qubits, bringing the upper bound on the minimum number of qubits required down to the information-theoretic space lower bound, while also taking only polylog time per step. QONG was also implemented in JavaScript, so you can play it.
Fantasy Fantasy Football
Willie Boag, Lisa Conyer
Fantasy Football is America's favorite pretend pastime. Friend groups of all varieties play, though we all have that asshole friend Jason that never sets his lineup. It is interesting that although most fantasy football platforms allow for computer-aided "auto-draft", there is little-to-no support for "auto-manage my entire team all season." Such a notion -- bots playing fantasy -- is a great idea, and I'll tell you why. One can create an entire league of bots playing against one another using different policies. In fact, why stop at one league? Let's create thousands of leagues of bots, and then draft those bots. It'll feel just like we're playing fantasy football, but without all of the undesirable overhead; we'll be playing *fantasy* fantasy football!
Evaluation of MapReduce-Style Computation on a Cluster of Arduinos
S. Jun
Not really. This work involved hands-on implementation. See Figure 2.
Complexity of Computerized Turnstiles
Oscar Hernandez
Are turnstiles computationally hard? Pretty much.

2018

Wikipedia’s Bias Against the US Supreme Court
Willie Boag
In such politically tumultuous times, many turn to Wikipedia for objective news. Unfortunately, Wikipedia has clear biases, which we show here.
Incremental Improvements on the Placement and Routing of Minecraft Redstone Circuits
Quan Nguyen
We present Dewey, the successor to the PERSHING place-and-route tool for Minecraft Redstone circuits. As a performance-oriented rewrite of PERSHING from the ground up in C, Dewey implements greatly improved routines for standard cell placement and routing, and leaks a ton of memory. However, we materialize a speedup of up to 34.0x over previous work. Dewey makes tremendous strides towards the placement and routing of entire computer processors in Minecraft.
PolyLU: The Over-Powered Activation Function
Anonymous
We present PolyLU, the polynomial linear unit, as an activation function with optimal performance characteristics, outperforming ReLU when applied to deep models. The celebrated ReLU activation function eliminates the vanishing gradient; our work takes this a step further, culminating in the field's first trainable activation function that is also resistant to adversarial attacks. Experiments demonstrate the potential of PolyLU and support a mathematical analysis of PolyLU's optimality.
Good Dogs of Cambridge
Willie Boag
The city of Cambridge is home to over 105,000 residents, yet there are barely 5,000 registered dogs. With more than a 20-to-1 ratio of people to dogs, it is now more important than ever that we use big data in order to automatically identify where to find the best dogs. Because Cambridge is really h*ckin big, it can be incredibly costly to go to a neighborhood that lacks sufficient good dogs, only to realize you need to go elsewhere if you wanna be part of a high-quality belly rub. In this work, we propose a method for identifying which parts of Cambridge can maximize one's time with some big ole puppers.
“We need more free food!”: Free Food and Excellent Performance of Academic Institutes
Soya Park, Nikhil Vyas
Many tech companies nowadays provide free food to their employees as part of their employee welfare programs. Previous research reveals that free food at companies contributes to building bonds between employees and to increasing productivity. However, not much research on free food has been done in academic environments. Through a three-week field experiment, we examine in depth how participants find and ingest free food. We report how participants with different personalities manage to survive on free food. We also observe behavior changes in participants by conducting post-interviews. We foresee that our work will aid Ph.D. students, who are generally broke, and motivate university administrators to provide more free food for the success of their students.

2019

TCP Over Email
Venkat Arun, Akshay Narayan, and Mercury
In response to recent trends in e-mail usage patterns, we propose "Reliable e-mail", or remail. remail implements reliable delivery of email messages end-to-end, in accordance with the end-to-end principle. This is a timely contribution, in light of increasing software complexity and Software 2.0.
Human Baseline for Zero-Shot Transfer Learning of Good Dogs
William Boag, Shashank Srikant
It is estimated that there are 900 million dogs in the world. That is a lot of dogs, and we assure you that each one is a good dog [2]. Unfortunately, we don’t have enough time to meet each dog de novo, and have inevitably needed to rely on word-of-mouth to learn about which dogs to meet and when. In this work, we demonstrate human-level performance for zero-shot dog recognition from features described by other humans. Human performance is robust (>85% accuracy), even when presented with challenging comparisons. This accuracy is in the same ballpark as Karpathy et al.’s work on a human baseline for ImageNet [6]. We believe that this work will help future researchers develop AI-based tools for super-human performance on word-of-mouth-based human-dog introductions. From a neuroscience perspective, this work also establishes the presence of a seeming information barrier between the visual cortex and the language system of the human brain.
Do NOT Modify Breadth-First Search
Michael Coulombe
This paper settles the conjecture that Breadth-First Search can solve the single source shortest paths problem on weighted graphs in linear time, by demonstrating counter-example graphs for which BFS and modifications thereof fail.
To Mississippi or Not To Mississippi: A Review of Colloquial Clocks
Dylan Cable, Austin Gadient, Aspen Hopkins, Divya Shanmugam, Dominique Thompson
We present the first study of informal counting, or the linguistic approximation of one second. For years, we have trusted the word "Mississippi" to correspond with a single second. It is past time to vet this choice and propose a path forward. First, we quantify how imprecise the word 'Mississippi' is. Second, we evaluate alternatives across a random sample of 24 people (read: our friends). We conclude with recommendations going forward.
Successful Task and Motion Planning with Hindsight Experience Reinterpretation

In this paper we describe an approach for improving the performance of robotic Task and Motion Planning (TAMP) systems on complex, long-horizon manipulation problems. In an uncertain environment, unexpected situations are all but guaranteed to arise, causing the robot to risk failure to plan or complete a task. HERinterpret allows the robot to reinterpret its initial goal under uncertainty, claiming a more believable goal given the current state. We show how our Deep Reframing Net allows the robot to reinterpret challenging goals into more achievable ones, resulting in a 100% success rate on a variety of TAMP problems. We also propose alternative planning methods that do not require deep reframing but still achieve a 100% success rate through goal reinterpretation.
IRBot: InadvisableRelationshipBot
nyancy
Many graduate students struggle to deal emotionally with daily life stresses, ranging from overdue problem sets to not dying at sea. Additionally, computer scientists may feel more comfortable typing at a screen than engaging in human contact. Although therapy chatbots exist and in fact have reached millions of smartphone users, they run on remote servers, creating privacy concerns. In this work, we propose that a local chatbot can also provide useful advice and can reach the vulnerable sub-population of computer science grad students. We create InadvisableRelationshipBot (IRBot) using high-quality online commentary from www.reddit.com/r/relationships.

2020

Adaptive Mesh De-Resolution
Paul Zhang
The finite element method (FEM) is a triumph of modern engineering that allows us to computationally solve a plurality of partial differential equations (PDE) and physical system models. It would not be an overstatement to say this technique alone enables and supports the entire infrastructure of modern society. Despite its flexibility and ubiquity, the method comes at high computational cost: it seemingly requires denser meshes to decrease numerical error. In this paper we demonstrate that contrary to established hearsay, sparser meshes are actually what is needed to decrease numerical error. We introduce the adaptive mesh de-resolution technique which provably decreases all numerical error to exactly zero.