
News

Tuesday, August 11, 2020

Interactive vs Generational Collaboration

My research group has been writing a lot of papers recently. Like any reasonable set of CS folks, we use LaTeX as our typesetting tool of choice. Of course, like any modern set of collaborators, we also all want to be editing the paper at the same time, changing words as others are writing, and completing each other's sentences.

To accomplish this, we have generally turned to Overleaf, which -- like LaTeX itself -- isn't a great solution to the problem, but it is the best solution we have. One of the interesting things about Overleaf is that it supports two completely different models of collaboration, which -- for the purposes of this post and without reference to any external sources -- I'll term "interactive" and "generational".

Interactive Collaboration

Interactive collaboration models a shared workspace. All changes are made simultaneously and with fine-grained (real-time) synchronization. Interactive collaboration seems to work best when edit operations are "local" -- generally modeled by granting participants a notion of a cursor or selection around which their edits are centered.

Interactive collaboration doesn't work very well with dependent or global edits (e.g., trying to make an overarching change to the order of sections). And -- as often happens on Overleaf -- it really doesn't work well when the synchronization system begins to bog down.

Generational Collaboration

Generational collaboration models... well, I'm not exactly sure what it models. Perhaps a series of independent artifacts, individually crafted, but with reference to other previously created artifacts? Regardless, changes are made individually and merged, asynchronously, into a final artifact.

Generational collaboration works well for changes that don't overlap and (thus) can be merged automatically. It is, therefore, a natural fit for global but distinct changes (e.g., changing all references to a figure; doing a global re-ordering of paragraphs in the "Results" section while another collaborator works on the "Introduction" section).

Further, generational collaboration can operate in a decentralized and disconnected way -- distinct advantages, especially on a paper deadline when nobody has time to wait on a slow server. (Or, e.g., on an airplane.)

Combining the Models

So, how do we combine the models to get the benefits of both?

Overleaf's approach is to support the generational collaboration model via git push/pull support, but to require that no interactive edits happen between a git pull and the subsequent git push. This, effectively, makes generational collaboration lower priority than interactive, making it incredibly awkward to leverage this model while others are using the interactive model.

My preference here would, instead, be to start with a decentralized generational model and build in some notion of interactivity. I tend to like disconnected/disjoint tools, so perhaps the way to do this is to have the notion of a multiplayer editor protocol that can work (optionally, in parallel) with a git repository.

Here's a sketch of how that might work:

When a multiplayer (text?) editor opens a file, it would check in the same folder (and parent folders) for a .multiplayer subdirectory and contained config file. (Similar to how git looks for a .git subdirectory.)

The .multiplayer config file would contain either the address of a responsible server and a project token to identify the project to that server, or a list of peers to contact directly. Any multiplayer-capable editor would -- after prompting the user -- get the list of peers to coordinate editing with (possibly via connecting to a server, possibly via direct peer entries). The server could also handle STUN to make peer-peer communications easier.
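To make that concrete, here's a rough sketch of the discovery step. Everything below is hypothetical -- the .multiplayer/config path, the JSON format, and the "server"/"project"/"peers" fields are placeholders for a protocol that doesn't exist yet:

```python
# Hypothetical sketch: discover a .multiplayer config by walking up parent
# folders, the same way git searches for a .git directory. The config format
# (JSON with "server"/"project" or "peers" fields) is made up for this post.
import json
from pathlib import Path

def find_multiplayer_config(start: Path):
    """Return the nearest .multiplayer config (as a dict), or None."""
    for folder in [start] + list(start.parents):
        config_path = folder / ".multiplayer" / "config"
        if config_path.is_file():
            return json.loads(config_path.read_text())
    return None

# Example config contents -- either a coordinating server plus project token:
#   {"server": "wss://example.org/multiplayer", "project": "siggraph-paper"}
# ...or a list of peers to contact directly:
#   {"peers": ["192.0.2.10:7777", "192.0.2.11:7777"]}
```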

Regardless, said multiplayer-capable editor would now be connected to a peer cloud and could use peer-to-peer multicast techniques to keep the document synchronized, publish cursor state, and so on.

So that takes care of the interactive collaboration. How about the generational part? We could make it so the .multiplayer directory contains a reference to the most recent point in shared edit history at which an editor had synchronized to the peer cloud. If -- when your multiplayer editor connects -- your current file matches the recent version in the .multiplayer directory, then all changes between that version and the current peer cloud version could be automatically applied.

If, on the other hand, your current file differs from the reference version in the .multiplayer directory, then there's a generational change to merge. This is the classic "three-way merge" problem that (e.g.) git solves reasonably well. In the interactive setting, it might even make sense to allow you to request that others stop for a second and help with the merge.

Of course, if your text editor of choice doesn't support multiplayer, you could still edit single-player and use a command-line utility to deal with the three-way merges. (Perhaps with a "hang on, let's do the merge together everyone" option to avoid the same problem that Overleaf has.)
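Here's a sketch of what the core of that command-line utility might look like -- it just leans on git's existing three-way merge machinery (git merge-file) rather than anything bespoke, and the file names are made up:

```python
# Rough sketch of a single-player merge helper: combine my local edits and the
# peer cloud's latest version, relative to the last-synchronized reference copy.
# File paths here are hypothetical; "git merge-file" does the actual work.
import shutil
import subprocess

def three_way_merge(mine: str, base: str, theirs: str) -> bool:
    """Merge `theirs` into `mine`, using `base` as the common ancestor.
    Returns True if the merge completed without conflicts."""
    shutil.copyfile(mine, mine + ".backup")  # keep a copy in case of conflicts
    result = subprocess.run(["git", "merge-file", mine, base, theirs])
    return result.returncode == 0  # merge-file exits 0 when there are no conflicts

# e.g. three_way_merge("paper.tex",
#                      ".multiplayer/last-sync/paper.tex",
#                      "peer-cloud-latest/paper.tex")
```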

So, Who's Building It?

Perhaps the most surprising thing for me in my (scant) research around this topic is that nobody has built an interoperable interactive collaboration system that actually supports, e.g., cross-editor cursor sharing -- at least, not one that is still in action today. (I seem to recall that Google Docs at some point had an API with cursors.)

Perhaps the notion of a cursor is so editor-specific that it takes special sauce to do it. Perhaps the idea that you need to run a server makes people want to lock in users to finance said server. And perhaps I just haven't looked hard enough.

If it's that last case -- if I just haven't looked hard enough -- let me know! I'd love to try out something different with my group when we next have a big crop of papers ready to go.

Tuesday, July 28, 2020

Puzzle Nutrients and The Cubedex of Brass and Wood

The games in the Cubedex Series were each designed as virtual puzzle hunt replacements. As such, they have a particular flavor that grew out of my puzzle hunt design strategies. In this post, I will describe what I look for in a Cubedex game puzzle, and talk about how these criteria relate to The Cubedex of Brass and Wood, which is forthcoming on Steam (indeed, there is a demo there now).

The Puzzle Nutrients

I tend to think of puzzle-hunt style puzzles as having a few different archetypes. In reality, perhaps they are more like ingredients -- in that they can be mixed and complementary -- or like nutrients, in that a healthy puzzle hunt should contain a balance of all of them.

They are, in no particular order:

Hidden Puzzles

Hidden puzzles present the player with an unknown artifact and ask them to understand the artifact in order to solve it. These are the bread-and-butter of puzzle hunts, and require a sort of logical twist wherein the player must hypothesize rules under which the puzzle might contain information and extract that information. Once the proper rules are found, the puzzle becomes simple to decode, and provides some sort of structure that convinces the player that their decoding is correct.

The simplest hidden puzzles are those that present information in a known encoding scheme -- like resistor color code, semaphore, or ASCII, to name a few -- lightly obscured by some sort of transformation (e.g., written in letters which are mirrored). More interesting encoding puzzles may present physical artifacts that need to be manipulated to produce the decoding effect (e.g., a literal decoder ring, a pattern that must be viewed through a color filter, or -- in one of my favorite examples -- a set of paper tiles, only some of which were treated with a fire-retardant chemical).
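As a toy illustration -- the puzzle content below is invented for this post, not taken from any hunt or game -- here is that "hypothesize a rule, extract the information" loop applied to the simplest sort of encoding puzzle:

```python
# A made-up mini hidden puzzle: a known encoding (decimal ASCII codes) lightly
# obscured by a transformation (each code is written backwards). The fact that
# the decoding spells a word is the structure that confirms the hypothesis.
obscured = ["27", "37", "86", "86", "96", "87"]

def decode(tokens):
    # Hypothesized rule: reverse each token, then read it as a decimal ASCII code.
    return "".join(chr(int(t[::-1])) for t in tokens)

print(decode(obscured))  # prints "HIDDEN"
```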

A prototypical encoding puzzle in The Cubedex of Brass and Wood is The Clock, which combines a decoder-ring-like device with another encoding scheme.

Combinatorial Puzzles

Combinatorial puzzles challenge the player to find a satisfactory solution from a large solution space. The solution space is generally abstract and generally contains very few satisfactory solutions. This makes brute force search infeasible (or, at least, tedious). However, in contrast to hidden puzzles, the quality of a solution isn't ever in doubt.

Typical puzzles of this sort include physical put-together puzzles (e.g., build a cube from these tetrominoes, fill a bird silhouette with these tangrams) and constraint-satisfaction puzzles like Sudoku. In The Cubedex of Brass and Wood, the prototypical combinatorial puzzle is probably The Balance, in which the player must fill a tray with masses so that it balances.

This puzzle type generally requires one to either constrain the search space by developing a higher-level understanding of the puzzle, or to write a computer program that searches and checks solutions for you. I think both of these solution methods can be really satisfying; indeed, sometimes they can be combined to produce a really, really satisfying solution -- like realizing that "Lights Out" can be solved by using linear algebra over finite fields, or that state equivalence via move reversibility allows for the creation of more efficient Sokoban solvers.
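As a toy example of the "write a program that searches and checks" route -- using a made-up balance-like puzzle, not The Balance itself -- here's brute force applied to splitting a set of masses into two pans of equal weight:

```python
# Brute-force search over a combinatorial solution space: try subsets until one
# splits the masses into two pans of equal total weight. The masses are made up.
from itertools import combinations

def find_balanced_split(masses):
    """Return (left, right) pans with equal total mass, or None."""
    total = sum(masses)
    if total % 2 != 0:
        return None
    target = total // 2
    indices = range(len(masses))
    for size in range(1, len(masses)):
        for left in combinations(indices, size):
            if sum(masses[i] for i in left) == target:
                right = [masses[i] for i in indices if i not in left]
                return ([masses[i] for i in left], right)
    return None

print(find_balanced_split([3, 1, 4, 1, 5, 9, 2, 6, 5]))
# ([3, 9, 6], [1, 4, 1, 5, 2, 5])
```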

Performance Puzzles

Performance puzzles ask the player to interact with an artifact in order to manipulate it into a good state. Performance puzzles mix well with hidden puzzles and combinatorial puzzles, adding a layer of reflex- or precision-based challenge to the existing puzzle.

I think that performance puzzles are often neglected in a puzzle hunt context because they can be hard to run for a large group in person (for all that they can also be spectacularly fun). Virtual performance puzzles really open up more options in this regard.

The Cubedex of Brass and Wood's performance puzzle is The Maze, which requires reasonably fine control of the angles of mirrors to direct a beam of light that allows walls to be moved. The version of this puzzle in the retail game is toned down relative to the original: you get to trigger wall movement manually. In the original, wall movement was triggered automatically, and any stray hand motion could cascade and ruin the entire maze setup. My testers found this incredibly frustrating, so I changed it.

The Machine is another Cubedex of Brass and Wood puzzle with performance elements, though it is in some ways also a hidden puzzle, given that the interaction rules are not clear. (Arguably, for some variants, it is also a combinatorial puzzle, as at least one tester [threatened to] use Z3 to extract a logical formula consistent with the device's behavior during their solution attempt.)

Building Puzzles

A building puzzle asks the player to assemble a device to overcome a challenge. Like a combinatorial puzzle, the building blocks and goal are generally reasonably straightforward. However, the solution often relies on a [simulated] physical task, rather than a discrete measure of correctness; and the solution space is generally large and open-ended.

Good examples of physical puzzle hunt building puzzles include building a bridge over an obstacle, designing a Lego gear box with a specific ratio, or building an autonomous vehicle that can cross a specific terrain. I use that last idea in The Cubedex of Brass and Wood in The Cart, where the player builds a line-following robot using real[-ish] vacuum tube logic components.

One of the fun things about building puzzles is that the players can often design "cheesy" solutions that just barely work; The Cart is not immune to these sorts of solutions, but I think I've included at least one variant of the puzzle that will be hard to solve in an unintended way (prove me wrong! -- it's always fun seeing how these things can be broken).

A Balanced Puzzle Diet

These are the four basic puzzle nutrients that I think about when building a puzzle hunt or game. They can serve as a prompt to make sure the hunt is balanced, or as a lens to look through when attempting to solve a puzzle.

As I continue to polish and release the Cubedex Series, I hope you will look out for these puzzle types in the games. Every game aspires to be a complete meal, but they certainly have different balances of nutrients within them (e.g., The Cubedex of Boxes and Lines is almost entirely hidden puzzles with a performance element).

Sunday, April 9, 2017

Who Wins (Simple) Gwent?

So, a few weeks back, I played some of the beta of Gwent, an online two-player deck-building card game. This post analyzes a very simple version of the game -- one where the cards have value but no text, and both players always draw the exact same cards -- and shows that in this simple version, perfect play results in a tie.

Basic Rules

Let's start with the rules.

Gwent is a two-player card-based point-accumulating game played over three rounds. Players begin the game by drawing 10 cards, of which they may redraw three. During each round, players take turns playing cards. Each card changes the players' point totals. If a player doesn't want to play any more cards during the round, they can "pass". The round is over when both players pass.

The player with the higher point total at the end of the round is the winner. In the case of ties, both players are considered to have won. If either player has reached two wins, that player wins the game. If both players win simultaneously, the game is considered a draw. If neither player has reached two wins yet, another round is played.

Point totals are reset at the end of each round. Players replenish their hands by drawing two cards after the first round and one card after the second round.

The first player for the first round is determined randomly; for subsequent rounds, it is the player that won the previous round. I forget what happens in the case of ties, but it's not crucial for the analysis below. (I think that the lead switches.)

Before the game begins, each player constructs a deck of 25 (or more) cards. These decks are where each player draws their cards from. The contents of this deck are limited in various ways that I won't discuss, since part of the conceit of this post is that we're going to ignore the existence of decks and assume each player draws the same cards.

This treatment of the rules also neglects discussions of Factions, Leaders, and Card Text. Briefly, each player must choose a Faction when building their deck. This Faction limits which cards may be used in the deck, and may provide a global ability [e.g. "redraw an extra card at the start of each round"]. Each player must also choose a Leader. A Leader is essentially an extra card that is always drawn at the start of the game. Finally, each card may have complicated Card Text which specifies how it interacts with the point totals and other game state. For the discussion that follows, none of these things will be considered.

Simple Gwent: Provably Boring

In "Simple Gwent", which I will talk about for the rest of the post, all cards have a positive point value, each player starts with the same hand, and both players will draw the same cards between rounds.

It turns out that Simple Gwent always ends in a draw if played competently. Specifically, I will demonstrate that each player can force a draw, regardless of what the other player does. I break this strategy into four cases, depending on the state of the game. In each case, the players are labelled F and S, "first to act" and "second to act". Note that labels may change between rounds.

Case Zero-S: Score 0-0, Same Cards; S to draw/win

In the first round, the second player can guarantee a draw by playing a simple strategy:

  • Play whatever F just played.

This always results in a drawn round with identical cards remaining in both hands, at which point player S uses strategy One-F or One-S to force a draw/win (I don't recall whether the lead switches on a draw).

Case Zero-F: Score 0-0, Same Cards; F to draw/win

In the first round, the first player can guarantee a draw by playing a very basic strategy:

  • Pass.

This results in one of two outcomes:

  1. Player S plays at least one card and wins the round, in which case Player F uses strategy Behind-S next round.
  2. Player S also passes, which results in a drawn round, in which case Player F uses strategy One-F or One-S (depending on whether the lead switches on a draw) next round.

Case Behind-S: 1-0 (F winning), S Has Card Advantage; S to draw/win

In the second round, when S has a strict superset of F's cards (that is, S has all the cards F has, and at least one additional card), S can force a win/draw by playing as follows:

  • If F plays, play the same card.
  • Once F passes, play the cards in your hand that F does not have in their hand, then pass.

This guarantees a round win for S, bringing the score to a 1-1 tie, and making both players' hands the same. At this point, S uses strategy One-F to guarantee a win/tie.

Case One-F: Score 1-1, Same Cards; F to draw/win

In a round where both players have already won and have the same cards, the first player can guarantee at least a draw by following this strategy:

  • Play all cards.

This results in one of two outcomes, depending on what player S does:

  1. Player S plays all their cards, in which case the round is tied and the game ends in a draw.
  2. Player S does not play all their cards, in which case player F wins the round and the game.

Case One-S: Score 1-1, Same Cards; S to draw/win

In a round where both players have already won and have the same cards, the second player can guarantee at least a draw by using this strategy:

  • Play all cards.

This results in one of two outcomes, depending on what player F does:

  1. Player F plays all their cards, in which case the round is tied and the game ends in a draw.
  2. Player F does not play all their cards, in which case player S wins the round and the game.

Conclusions and Notes

The above demonstrates that Simple Gwent is boring whenever both players draw the same hand and know it. In other words, [full] Gwent is very much a deck-building game -- your basic goal as a player is to make sure that you end up with a better draw than your opponent. Of course, this also suggests that it would be interesting to figure out just what a good draw looks like (obviously, it should be high-scoring; but perhaps it is more important that it have two [three?] high-scoring subsets).

Also, the "Pass First" strategy seems really weird in this context -- shouldn't F take the opportunity to get an early win? Not always! Consider a simplified scenario where players start with one card of value a, draw one card of value b < a after the first round, and a card of value c after the second.

  • Round 1: First player has {a}, second player has {a}: First player plays a, second player passes. First player wins.
  • (players draw b)
  • Round 2: First player has {b}, second player has {a, b}. Then, either:
    • If first player passes, second player plays b (winning), then wins the last round because a + c > b + c.
    • If first player plays b, then second player plays a (winning), then wins the last round because b + c > c.

In other words, playing first in simple Gwent can lose you the game. Maybe there's a take-away there for real Gwent (though I think some of the card text is written to explicitly mitigate this problem).

Finally, I should note that in the process of coming up with this reasonably simple proof, I also wrote some reasonably less simple code to explore Simple Gwent game trees and visualize game states. It's not particularly complete, since I ended up not using it for the final post. But if you're interested, you can check it out; it prints some things to the console and also shows a reasonably-nicely styled game state, but mostly it's some half-baked hacks.
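For the curious, here is a fresh, minimal sketch of such a game-tree search (it is not the code mentioned above, and it has to commit to the rule detail I forgot: it assumes the lead switches after a tied round):

```python
from functools import lru_cache

def solve(hand, draws):
    """Value of Simple Gwent when both players start with the same `hand`
    (a tuple of positive card values) and get the same `draws` between rounds,
    under perfect play: +1 = first player wins, -1 = second player wins, 0 = draw."""

    def end_round(hands, points, wins, leader, draws):
        w0, w1 = wins
        if points[0] >= points[1]: w0 += 1   # a tied round counts as a win for both
        if points[1] >= points[0]: w1 += 1
        if w0 >= 2 and w1 >= 2: return 0
        if w0 >= 2: return +1
        if w1 >= 2: return -1
        # The round winner leads the next round; assume the lead switches after a tie.
        if points[0] > points[1]: lead = 0
        elif points[1] > points[0]: lead = 1
        else: lead = 1 - leader
        new_hands = tuple(tuple(sorted(h + draws[0])) for h in hands)
        return act(new_hands, (0, 0), (False, False), lead, (w0, w1), lead, draws[1:])

    @lru_cache(maxsize=None)
    def act(hands, points, passed, turn, wins, leader, draws):
        if passed[0] and passed[1]:
            return end_round(hands, points, wins, leader, draws)
        if passed[turn]:                      # a passed player stays out for the round
            return act(hands, points, passed, 1 - turn, wins, leader, draws)
        options = []
        # Option: pass.
        now_passed = (True, passed[1]) if turn == 0 else (passed[0], True)
        options.append(act(hands, points, now_passed, 1 - turn, wins, leader, draws))
        # Option: play any card from hand.
        for i, card in enumerate(hands[turn]):
            rest = hands[turn][:i] + hands[turn][i + 1:]
            if turn == 0:
                new_hands = (rest, hands[1])
                new_points = (points[0] + card, points[1])
            else:
                new_hands = (hands[0], rest)
                new_points = (points[0], points[1] + card)
            options.append(act(new_hands, new_points, passed, 1 - turn, wins, leader, draws))
        # The first player maximizes the outcome; the second player minimizes it.
        return max(options) if turn == 0 else min(options)

    start = (tuple(sorted(hand)), tuple(sorted(hand)))
    return act(start, (0, 0), (False, False), 0, (0, 0), 0, tuple(draws))

# A tiny instance: both players hold {5, 3}, then both draw a 2, then a 1.
print(solve((5, 3), ((2,), (1,))))  # prints 0: a draw, as the argument above predicts
```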

Wednesday, February 22, 2017

Come At Me Slow

I can't very well mention last year's New Year's album and not point out that the 2016/2017 album is also out. It's called "Come At Me Slow" and you can listen below, on SoundCloud, or on Spotify.

The standouts for me this year are the amazing cover art by Kathleen and the epic sounds of the title track. It is with some regret that I note that I still haven't been awarded the Steam achievement described by "Hate Plus / Love Note", despite sending a verification photo and e'rything.

The New Year We Missed

With the crush of the SIGGRAPH deadline, I failed both to create a New Year's card and to make a post detailing the Jimike New Year's album for 2015/2016. So let's fix that right now, shall we?

Grid Line

My remedial New Year's card for last year is titled "Grid Line", and that about sums it up. The grid gets interesting half-way through. Play here (works on smartphones, too!).

I think this might also be my first New Year's card with sound effects (though those don't work on phones; sorry).

36-Hour Day

Last year's album was titled 36-Hour Day, since we were producing it in such widely separated time zones. You can listen on SoundCloud (widget below) or Spotify.

Standout tracks for me are "Dear Snails" and "The Only Fi I Have", both of which have honest, if shallow, lyrics.

Tuesday, May 17, 2016

Convex Horizons

Today, I started thinking about the complexity of a certain geometric problem: finding the horizon of a set of convex objects. In this post, I'll first state the problem and then attempt to explain why this is an interesting problem to think about.

The Convex Horizon Problem

Given a list of 2D convex objects, return a function h that maps each x-coordinate to the maximum y-coordinate such that (x, h(x)) is contained in some object (h is undefined at x-coordinates covered by no object). This is the "horizon" or "skyline" of the objects.

First Reduction: when working with convex polygons, this is equivalent to finding the horizon of a series of line segments; or, equivalently, trapezoids with bottom edges at negative infinity. So this is the version of the problem I'll be thinking of:

(The horizon of the many-hued trapezoids is the thick black polyline.)

Thoughts on Complexity

Any algorithm that computes such a horizon must spend time at least linear in the number of output points in the horizon line. So it's interesting to think about how many points this could be. Particularly, I find this intriguing because of two different (related) problems.

The Box Horizon problem pretty clearly has output of size at most 4N (for N boxes). The argument goes something like: the horizon is a chain of line segments alternating between flat segments and vertical segments. The vertical segments correspond to the vertical edges of boxes, and each box edge appears at most once in the list. This means there are at most 2N vertical segments, which must (therefore) separate at most 2N horizontal segments.

(Mind you, the time complexity of an algorithm that computes a box horizon from an unsorted set of boxes is likely O(N log N), since reducing sorting to box horizon seems likely to be trivial.)
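For concreteness, here's a sketch of the standard sweep for the box case. The representation -- each box as (left, right, top), with its bottom at negative infinity -- is just a convenient choice for this post:

```python
import heapq

def box_horizon(boxes):
    """boxes: list of (left, right, top). Returns the horizon as a list of
    (x, top) breakpoints; top is the height from x until the next breakpoint,
    and None marks a gap where no box is present."""
    events = sorted(set(x for l, r, t in boxes for x in (l, r)))
    by_left = sorted(boxes)          # process boxes in order of their left edge
    live = []                        # max-heap of (-top, right) for boxes seen so far
    horizon, i, prev_top = [], 0, object()
    for x in events:
        while i < len(by_left) and by_left[i][0] <= x:
            l, r, t = by_left[i]
            heapq.heappush(live, (-t, r))
            i += 1
        while live and live[0][1] <= x:   # lazily drop boxes that have ended
            heapq.heappop(live)
        top = -live[0][0] if live else None
        if top != prev_top:
            horizon.append((x, top))
            prev_top = top
    return horizon

print(box_horizon([(0, 4, 2), (1, 3, 5), (6, 8, 1)]))
# [(0, 2), (1, 5), (3, 2), (4, None), (6, 1), (8, None)]
```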

The logic used to argue for the output complexity of box horizon breaks for trapezoid horizon because the topmost trapezoid can switch not just at an edge, but at a line segment intersection. And it's easy to see that there can be quadratically many intersections among lines (picture N lines in general position: every pair of them crosses).

But this standard lines example doesn't have a clear translation to the trapezoid horizon setting! So that leaves an interesting puzzle: is there some other argument that can be used to demonstrate linear output complexity of trapezoid horizon? Or, failing that, what's a super-linear output complexity example for this setting?

I plan to ponder this more over the next few days. If you have some ideas, feel free to get in touch.

Wednesday, April 1, 2015

The Rktcr Benchmark

In preparing Rktcr for release on Steam, I've been making some minor adjustments to the code and art (Steam Achievements! Blinking characters!). As I've made a bit of a fuss about before, Rktcr is a game that plays exactly the same across three compiler/OS combinations (g++/Debian, clang++/OSX, cl.exe/Windows). As such, part of any update is confirming determinism with an extensive test suite.

Specifically, I have the game compute SHA1 hashes of relevant state as it plays itself on a hand-crafted set of ~1,600 par-time paths. Yes, there are over 1,600 individual path segments you might want to play in Rktcr, and I have a plan to add cheat-proof leaderboards for all of those. But that's a topic for another post.
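The actual test harness is part of Rktcr's C++ code, but the idea fits in a few lines; in this sketch the (x, y, angle) state tuple and the replay format are stand-ins, not the game's real state:

```python
import hashlib
import struct

def replay_digest(ticks):
    """ticks: an iterable of (x, y, angle) state tuples produced by replaying a
    recorded path with the game's fixed-timestep simulation (a stand-in here)."""
    h = hashlib.sha1()
    for x, y, angle in ticks:
        # Hash a fixed binary encoding (not printed floats), so the digest is
        # reproducible exactly when the simulation is bit-exact across builds.
        h.update(struct.pack("<ddd", x, y, angle))
    return h.hexdigest()

# Two builds agree on a path exactly when replay_digest(...) matches for it.
```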

The Benchmark

So how do my Windows and Linux partitions stack up against each other on this (blended filesystem/compiler optimization) benchmark? (I'm not testing clang++/OSX since that's not installed on the same machine.)

OS        Compiler                               Time
Windows   VS2013, Update 3                       120 seconds
Windows   VS2013, Update 4                       121 seconds
Windows   VS2013, Update 4, optimizations off    137 seconds
Linux     g++ 4.8.1                               79 seconds
Linux     g++ 4.9.2                               76 seconds

I'm pretty sure that the one-second difference between VS updates is within noise, though I dread that it may be a performance regression. Turning optimizations off demonstrates that they are at least doing something.

Under Linux, there is a noticeable improvement between the steam-runtime-targeting g++ 4.8 and my system-wide g++ 4.9; whether it's a compiler or standard library difference isn't clear, but it is certainly nice when newer compilers make old code faster.

Finally, I'm surprised that the Windows/VS version is doing so poorly compared to Debian/g++ -- my previous experience has been that cl.exe produces code that is a fair bit faster than either g++ or clang++. I'm left wondering if this particular code is something that cl.exe fails to optimize properly, or if the Windows stack is failing somewhere else (filesystem read performance? standard library implementation?). There could even be exotic hardware causes -- Windows and Linux are run from different [though identically-branded, purchased-at-the-same-time] SSDs -- perhaps one is slower.