Advent Of Code 2017
This year I discovered the advent of code website for the first time. I decided to get on with it, because I've always been terrible at programming challenges and I wanted to see how I could fare. I decided to use Erlang whenever possible to save time, and also turn this into my second blog post of the year (if you're wondering, I've been busy writing propertesting.com, and then started work to turn it into a real book with Pragmatic Programmers).
Day 1
Part 1
The first problem asks us to find the sum of all digits that match the next digit in a circular string:
 1122 produces a sum of 3 (1 + 2) because the first digit (1) matches the second digit and the third digit (2) matches the fourth digit.
 1111 produces 4 because each digit (all 1) matches the next.
 1234 produces 0 because no digit matches the next.
 91212129 produces 9 because the only digit that matches the next one is the last digit, 9.
My solution to this one is quite simple, and just uses a string translated to integers, adds the head to the tail of the list (to simulate the circular wraparound), and then compares pairs of integers:
day1_p1(String) ->
    Digits = [Char - $0 || Char <- String],
    CircularDigits = Digits ++ [hd(Digits)], % add the head last to go around
    day1_p1_(CircularDigits).

day1_p1_([]) -> 0;
day1_p1_([C,C|T]) -> C + day1_p1_([C|T]);
day1_p1_([_|T]) -> day1_p1_(T).
That worked fine.
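As a quick cross-check of the pairing logic, here's the same idea as a hedged sketch in Python (the function name is mine, not from the solution above):

```python
def captcha_sum(s):
    """Sum every digit that matches the next one, treating the string as circular."""
    digits = [int(c) for c in s]
    circular = digits + [digits[0]]  # append the head to simulate the wraparound
    return sum(a for a, b in zip(circular, circular[1:]) if a == b)
```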
Part 2
The challenge is changed so that you now have to consider the digit halfway across the list:
 1212 produces 6: the list contains 4 items, and all four digits match the digit 2 items ahead.
 1221 produces 0, because every comparison is between a 1 and a 2.
 123425 produces 4, because both 2s match each other, but no other digit has a match.
 123123 produces 12.
 12131415 produces 4.
This forces us to change the approach. What we need now is the ability to jump randomly through the list of digits to find its matching pair; if the values are the same we count it; if not we skip it and check the next one.
There are two terms useful for this: a tuple, or a binary. Both support O(1) random access. Fortunately, the problem at hand requires us to only use digits from 0 to 9, and so we know everything can fit in the bytes of a binary without messing up alignment:
day1_p2(String) ->
    Digits = << <<(Char - $0)>> || Char <- String >>,
    Size = byte_size(Digits),
    day1_p2_(Size-1, Size, Size div 2, Digits).

day1_p2_(-1, _Size, _Jump, _Bin) -> 0;
day1_p2_(Pos, Size, Jump, Bin) ->
    Current = binary:at(Bin, Pos),
    Pair = binary:at(Bin, (Pos+Jump) rem Size), % modulo to wrap
    Add = if Current =:= Pair -> Pair
           ; Current =/= Pair -> 0
          end,
    Add + day1_p2_(Pos-1, Size, Jump, Bin).
The call to the second function is a bit odd, using 4 values: the current position in the binary (starting from the end of it), the size of the binary value itself (byte_size(Bin) is O(1) as well, but I'll cache what I can), the value of the jump required to find the next pair (half the list size), and finally the binary itself.
By using a binary, we can access the integers as if they were in an array in most imperative languages—although the cost of modification would still be high, reading is fine—and get the answer we want in no time.
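The halfway comparison can also be cross-checked with a small Python sketch (names here are mine):

```python
def halfway_sum(s):
    """Sum every digit that matches the one halfway around the circular list."""
    digits = [int(c) for c in s]
    n = len(digits)
    jump = n // 2
    return sum(d for i, d in enumerate(digits) if d == digits[(i + jump) % n])
```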
Day 2
Part 1
This challenge asks us to break up uneven rows of numbers into the sum of the difference between their respective largest and smallest numbers:
5 1 9 5
7 5 3
2 4 6 8
 The first row's largest and smallest values are 9 and 1, and their difference is 8.
 The second row's largest and smallest values are 7 and 3, and their difference is 4.
 The third row's difference is 6.
In this example, the spreadsheet's checksum would be 8 + 4 + 6 = 18.
The trickiest bit is just converting the string into the proper structure, which can be done by repeatedly splitting it: rows are delimited by line breaks, and then whitespace delimits columns within each row. Then, each substring can be converted to an integer:
day2_p1(String) ->
    Rows = [[list_to_integer(X) % convert values
             || X <- string:lexemes(Row, "\s\t")] % break up columns
            || Row <- string:lexemes(String, "\n")], % break up lines
    lists:sum([lists:max(Row) - lists:min(Row) || Row <- Rows]).
The last line does the calculation required by the function. It would have been faster to fetch both the minimum and maximum in a single pass while keeping the accumulated sum of all rows, but the test input is small enough that this does not matter.
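Here's a hedged Python sketch of that single-pass variant, in case the idea is unclear (names are my own):

```python
def checksum(text):
    """Sum of (max - min) per row, scanning each row only once for both extremes."""
    total = 0
    for row in text.splitlines():
        nums = [int(x) for x in row.split()]
        lo = hi = nums[0]
        for n in nums[1:]:  # one pass picks up both the minimum and the maximum
            if n < lo:
                lo = n
            elif n > hi:
                hi = n
        total += hi - lo
    return total
```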
Part 2
The second variation tells us that each row contains only two numbers that evenly divide each other. The result of that division should be added for each row.
5 9 2 8
9 4 7 3
3 8 6 5
 In the first row, the only two numbers that evenly divide are 8 and 2; the result of this division is 4.
 In the second row, the two numbers are 9 and 3; the result is 3.
 In the third row, the result is 2.
The sum of the results would be 4 + 3 + 2 = 9.
Now that one is a bit more annoying, because the naive approach forces us to do an O(N^2) scan on each row, since we'll compare each value to every other value on the line:
day2_p2(String) ->
    Rows = [[list_to_integer(X) % convert values
             || X <- string:lexemes(Row, "\s\t")] % break up columns
            || Row <- string:lexemes(String, "\n")], % break up lines
    lists:sum([X div Y || Row <- Rows, X <- Row, Y <- Row,
               0 == X rem Y, X =/= Y]).
The first 3 lines remain the same. Only the sum changes. You can see that I just do the brute-force approach, while checking that the two values are not the same. Note that this is wrong: if a row of numbers contained say '5 2 3 5', then 5 would divide 5 evenly and the result would be 1, whereas my function would return 0. I decided to try it anyway and my answer worked for the input I had.
A better approach would need to iterate over the list and return the first result found. Another thing is that as soon as the first number is checked, it can be dropped from further iterations as it already had all its comparisons. As you progress forward in the list, you lower the number of iterations done within each row:
day2_p2_1(String) ->
    Rows = [[list_to_integer(X) % convert values
             || X <- string:lexemes(Row, "\s\t")] % break up columns
            || Row <- string:lexemes(String, "\n")], % break up lines
    lists:sum([catch divisor_result(Row) || Row <- Rows]).

divisor_result([]) -> 0;
divisor_result([H|T]) ->
    divisor_result(H, T),
    divisor_result(T).

divisor_result(_, []) -> 0;
divisor_result(X, [Y|_]) when X rem Y =:= 0 -> throw(X div Y);
divisor_result(X, [Y|_]) when Y rem X =:= 0 -> throw(Y div X);
divisor_result(X, [_|T]) -> divisor_result(X, T).
The result for that function is far more reliable, and also ~30% faster.
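The shrinking-iteration idea can be sketched in Python as follows (an illustrative translation, not the author's code):

```python
def divisible_sum(rows):
    """For each row, find the one pair that divides evenly; each value is only
    compared against later entries, so the inner scans shrink as we progress."""
    total = 0
    for row in rows:
        found = None
        for i, x in enumerate(row):
            for y in row[i + 1:]:  # x already saw everything before it
                if x % y == 0:
                    found = x // y
                elif y % x == 0:
                    found = y // x
                if found is not None:
                    break
            if found is not None:
                break
        total += found or 0
    return total
```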
Day 3
Part 1
The first challenge of the day presents you with a grid infinitely filled as a spiral:
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18  5  4  3 12 29
40 19  6  1  2 11 28
41 20  7  8  9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
The objective is to find how many moves a given number is from going back to 1:
 Data from square 1 is carried 0 steps, since it's at the access port.
 Data from square 12 is carried 3 steps, such as: down, left, left.
 Data from square 23 is carried only 2 steps: up twice.
 Data from square 1024 must be carried 31 steps.
The trick for that one is that each 'ring' counts for 1 move, as long as you can make it to the central point of any side. As such, to solve the problem we need to find the Nth ring we're at, and add that to the number of steps it takes to get to the middle of a side of that ring.
So let's start by finding which ring we're at.
Finding the number of entries seen so far within a ring is the sequence:
1   => 9   => 25  => 49  => 81
1x1 => 3x3 => 5x5 => 7x7 => 9x9
The trick is finding how to get that sequence of roots (1,3,5,7,9,...). It would be easy to just iterate, but since we'll count rings, it's nicer to find this relationship that lets us get a value directly:
0     => 1     => 2     => 3     => 4
0*2+1 => 1*2+1 => 2*2+1 => 3*2+1 => 4*2+1
Squaring that value will give us the result. We can put that code into a function that will search for a value N and find the ring (starting from 0) in which it belongs:
ring(N, Ring) ->
    %% Iterate through each ring as a square
    Square = square(Ring),
    if N =< Square -> Ring
     ; N > Square -> ring(N, Ring+1)
    end.

square(Ring) ->
    Root = 2*Ring+1,
    Root*Root.
The next step is to find how many steps are needed to go from any position within a ring to a center point of a side (since this center position then lets us just do one hop per ring to reach the center).
The number of entries per side at each ring size is unsurprisingly (just count them):
1 => 2 => 4 => 6 => 8
Which is simple to represent in Erlang:
side_len(0) -> 1;
side_len(Ring) -> 2*Ring.
The side position is a bit trickier to find: we have to figure out the central point of a side, and then figure out which central point we're closest to (alternatively: find which side you're on and then count the steps to the center point):
side_pos(N, Ring) ->
    RingStart = square(Ring-1)+1, % +1 move from end of last to start of current
    SideLen = side_len(Ring),
    SideMid = SideLen div 2, % halfway point of a side
    %% midpoints of 4 sides
    Points = [RingStart+(Side*SideLen)+SideMid-1 || Side <- [0,1,2,3]],
    %% closest midpoint
    lists:min([abs(Point-N) || Point <- Points]).
The function works by finding the first number of the current ring, and then calculating the 4 midpoints on that ring. Then the difference between a center point and the number we have represents the steps to take to reach that mid point; we keep the cheapest one as the optimal path. We can now tie it all together:
day3_p1(N) ->
    %% Calculate which ring we're in
    Ring = ring(N, 0),
    %% How many steps to ride to closest side center
    SidePos = side_pos(N, Ring),
    Ring + SidePos.
And this gives the right result.
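Putting the ring and midpoint arithmetic together, a Python sketch of the whole calculation might look like this (my own naming, not a line-by-line translation):

```python
def spiral_steps(n):
    """Steps from square n back to square 1: ring number + distance to a side midpoint."""
    if n == 1:
        return 0
    ring = 0
    while (2 * ring + 1) ** 2 < n:  # find the ring whose odd square covers n
        ring += 1
    side_len = 2 * ring
    ring_start = (2 * (ring - 1) + 1) ** 2 + 1  # first number of this ring
    # the four midpoints of this ring's sides
    mids = [ring_start + side * side_len + side_len // 2 - 1 for side in range(4)]
    return ring + min(abs(m - n) for m in mids)
```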
Part 2
We now have the next problem, a fairly trickier one. We still work with a grid, but each square contains the value of all its neighbouring squares that are allocated:
147  142  133  122   59
304    5    4    2   57
330   10    1    1   54
351   11   23   25   26
362  747  806--->  ...
We start from the middle, then right, up, left, left, down, etc. We're asked to find the first value larger than any given number. The model here needs to change. We can't just know the size of the matrix at first and know what is in the middle, and I found no super simple way to just flat-out estimate the position of the number I'm looking for. It seems the simplest way is to build the grid up to the point we need it, using relative positions.
We start at {0,0}. A move to the right is at {1,0}, and a move to the bottom would be {0,-1}. Here are 4 functions to handle it:
right({X,Y}) -> {X+1,Y}.
left({X,Y}) -> {X-1,Y}.
top({X,Y}) -> {X,Y+1}.
down({X,Y}) -> {X,Y-1}.
These functions are composable. The next function I wrote was one to calculate the value of any square:
value(Pos, Map) ->
    lists:sum([at(X, Map) || X <- [top(left(Pos)),  top(Pos),  top(right(Pos)),
                                   left(Pos),                  right(Pos),
                                   down(left(Pos)), down(Pos), down(right(Pos))]]).

at(Pos, Map) -> maps:get(Pos, Map, 0).
Just sum them all, with a default of 0 for an unset value. Simple enough. To find the largest value possible, all we need to do is to repeatedly calculate the current position's value until we get one greater than what we're looking for:
day3_p2(Max) ->
    Current = {0,0},
    Map = #{Current => 1},
    find_larger(Max, right(Current), Map).

find_larger(Max, Pos, Map) ->
    case value(Pos, Map) of
        Val when Val > Max ->
            Val;
        Val ->
            NextMap = Map#{Pos => Val},
            find_larger(Max, next_pos(Pos, NextMap), NextMap)
    end.
The map is initialised with the central value at 1 and the first move shifting to the right. After that, a recursive search is conducted. All that's needed is a next_pos call that can navigate the map. There's a simple set of rules we can apply:
next_pos(Pos, Map) ->
    %% if left is filled but top is empty, go up
    Top = at(left(Pos), Map) =/= 0 andalso at(top(Pos), Map) =:= 0,
    %% if left is unfilled but down is filled, go left
    Left = at(left(Pos), Map) =:= 0 andalso at(down(Pos), Map) =/= 0,
    %% if down is unfilled but right is filled, go down
    Down = at(down(Pos), Map) =:= 0 andalso at(right(Pos), Map) =/= 0,
    %% if right is unfilled, go right
    Right = at(right(Pos), Map) =:= 0,
    if Top -> top(Pos);
       Left -> left(Pos);
       Down -> down(Pos);
       Right -> right(Pos)
    end.
This is one of the places where Erlang would really benefit from having a cond expression, or if blocks that allow arbitrary function calls. Nested case expressions would be quite annoying here. I decided to just test them all and then pick the first one. It's not that expensive. With that code around, the grid can just cover itself entirely and return the value we need.
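The same walk can be sketched in Python, with the neighbour sum and movement rules inlined (an approximation of the approach above, not the original code):

```python
def first_larger(target):
    """Walk the spiral, summing allocated neighbours, until a value exceeds target."""
    grid = {(0, 0): 1}
    x, y = 1, 0  # first move is one step right of the origin
    while True:
        val = sum(grid.get((x + dx, y + dy), 0)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0))
        if val > target:
            return val
        grid[(x, y)] = val
        # same rules as next_pos: up if left is filled and up is empty, etc.
        if grid.get((x - 1, y), 0) and not grid.get((x, y + 1), 0):
            y += 1
        elif not grid.get((x - 1, y), 0) and grid.get((x, y - 1), 0):
            x -= 1
        elif not grid.get((x, y - 1), 0) and grid.get((x + 1, y), 0):
            y -= 1
        else:
            x += 1
```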
Day 4
Part 1
The challenge in this one was to get passphrases, and refuse those with repeated words:
 aa bb cc dd ee is valid.
 aa bb cc dd aa is not valid - the word aa appears more than once.
 aa bb cc dd aaa is valid - aa and aaa count as different words.
We're given a list of line-delimited passphrases and must return how many are valid:
day4_p1(String) ->
    Passwords = string:lexemes(String, "\n"),
    length([Password || Password <- Passwords,
            Words <- [string:lexemes(Password, " ")],
            length(Words) == sets:size(sets:from_list(Words))]).
The first line breaks lines up; the second line sets up the list comprehension to iterate over passwords (we'll count the valid ones); the third line is the tricky one: it breaks the passphrase into words, but since the string:lexemes/2 call is wrapped into a list, the value of the call is assigned into Words. The last line contains a filter that makes the expression valid only if there are no duplicate values, since sets will dedupe the list.
Part 2
Part 2 asks the exact same thing, with the caveat that words must not be anagrams of each other rather than dupes. We're not gonna calculate anagrams; instead we'll sort all characters in a word (we have no unicode in our input, so no need for collation or normalization) and if two words sort to the same value, sets will eliminate them:
day4_p2(String) ->
    Passwords = string:lexemes(String, "\n"),
    length([Pass || Pass <- Passwords,
            Parts <- [[lists:sort(Word) || Word <- string:lexemes(Pass, " ")]],
            length(Parts) == sets:size(sets:from_list(Parts))]).
Aside from variable renaming, the only part that changed is the third line of the function, where all words of a passphrase get sorted before being compared later. This worked well.
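Both parts can be cross-checked with one small Python sketch, where part 2 is just a different normalisation key (names here are mine):

```python
def count_valid(text, key=lambda w: w):
    """Count passphrases whose words are all unique under the given normalisation."""
    count = 0
    for line in text.splitlines():
        words = [key(w) for w in line.split()]
        if len(words) == len(set(words)):  # a set dedupes, just like sets:from_list/1
            count += 1
    return count

# Part 2: two words are anagrams iff their sorted letters are equal.
anagram_key = lambda w: ''.join(sorted(w))
```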
Day 5
Part 1
This one is about randomly accessing and modifying array values, and is a bit reminiscent of implementing brainfuck interpreters. You're given input like:
0 3 0 1 -3
Which is to be interpreted as a moving tape. Each number encountered specifies a number of 'jumps' to do to the right (or left if negative). After each jump, the offset we just left is increased by one:
 (0) 3 0 1 -3: before we have taken any steps.
 (1) 3 0 1 -3: jump with offset 0 (that is, don't jump at all). Fortunately, the instruction is then incremented to 1.
 2 (3) 0 1 -3: step forward because of the instruction we just modified. The first instruction is incremented again, now to 2.
 2 4 0 1 (-3): jump all the way to the end; leave a 4 behind.
 2 (4) 0 1 -2: go back to where we just were; increment -3 to -2.
 2 5 0 1 -2: jump 4 steps forward, escaping the maze.
The objective is to give how many steps (or jumps) were required to go out of bounds.
Now this is interesting to do in Erlang, because we don't have an easy array of elements with O(1) access and mutability. We could use ETS or the process dictionary to get around it and get a very fast solution, but I didn't really feel like implementing that. One key factor here is that we always only mutate the current element of the list, the one we're on.
For this, we can use a zipper approach. A zipper for a list is basically taking a regular list and representing it as a {Previous, [Current | Next]} structure. All the items seen are in the Previous list, which appears backwards, and we're always on the Current item, which is cheap to access and modify. Our skipping around will be a bit costly, but the cost of mutation would have eaten at us anyway, even if we had used maps or the array module:
day5_p1(String) ->
    Offsets = [list_to_integer(N) || N <- string:lexemes(String, "\n")],
    zip([], Offsets, 0).

zip(_, [], Ct) ->
    %% no instructions, we're done
    Ct;
zip(Prev, [N|Next], Ct) when N >= 0 ->
    {Skipped, NewNext} = take(N, {Prev, [N+1|Next]}),
    zip(Skipped, NewNext, Ct+1);
zip(Prev, [N|Next], Ct) when N < 0 ->
    {NewNext, Skipped} = take(abs(N), {[N+1|Next], Prev}),
    zip(Skipped, NewNext, Ct+1).

take(0, Acc) -> Acc;
take(_, {_, []}=Acc) -> Acc;
take(N, {Prev, [H|T]}) -> take(N-1, {[H|Prev], T}).
Two critical functions here: zip and take. zip's job is just to move in the correct direction while incrementing a counter. take is about doing our 'jump' by moving around in the zipper. The code is always the same, with the only variation being whether we're moving left or right, and flipping the argument to take to be in the correct order.
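In a language with mutable O(1) arrays, the same simulation is a short loop; here's a hedged Python sketch for comparison (names are mine):

```python
def steps_to_exit(offsets):
    """Follow the jump offsets, incrementing each one as we leave it,
    and count the steps until we fall off either end of the tape."""
    tape = list(offsets)
    pos = steps = 0
    while 0 <= pos < len(tape):
        jump = tape[pos]
        tape[pos] += 1  # the offset we just used goes up by one
        pos += jump
        steps += 1
    return steps
```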
Part 2
Part 2 has only a tiny variation: after each jump, if the offset was three or more, instead decrease it by 1. Otherwise, increase it by 1 as before.
It's easy to modify the code from above to work with that by parametrizing the increment:
day5_p2(String) ->
    Offsets = [list_to_integer(N) || N <- string:lexemes(String, "\n")],
    zip([], Offsets, 0, fun day5_p2_incr/1).

zip(_, [], Ct, _) ->
    Ct;
zip(Prev, [N|Next], Ct, Incr) when N >= 0 ->
    {Skipped, NewNext} = take(N, {Prev, [Incr(N)|Next]}),
    zip(Skipped, NewNext, Ct+1, Incr);
zip(Prev, [N|Next], Ct, Incr) when N < 0 ->
    {NewNext, Skipped} = take(abs(N), {[Incr(N)|Next], Prev}),
    zip(Skipped, NewNext, Ct+1, Incr).

day5_p2_incr(N) when N >= 3 -> N-1;
day5_p2_incr(N) -> N+1.
For good measure, let's retrofit the first one:
day5_p1(String) ->
    Offsets = [list_to_integer(N) || N <- string:lexemes(String, "\n")],
    zip([], Offsets, 0, fun(N) -> N+1 end).
And this is a pass. This implementation is between 3 and 6 times faster than one with maps or (Erlang) arrays, mostly because updating random values in larger data structures is more costly over time, even if the moving around is cheaper, than what we have here with the zipper-like approach.
Day 6
Part 1
This one asks of us that we take a given number of memory banks (they say 16) each with a counter, and redistribute the highest value to all neighbours equally until we spot repeated entries.
For example:
0: [0, 2, 7, 0] <== 7 is the max value
1: [2, 4, 1, 2] <== 3rd slot is set to 0, and we add 1 to each in sequence until 7 is evenly distributed
2: [3, 1, 2, 3] <== same with the first 3 value
3: [0, 2, 3, 4] <== same with the 4 value
4: [1, 3, 4, 1] <== same with the 4 value again
5: [2, 4, 1, 2] <== we've seen this in step 1
So we get that good old mutable memory again, but we have the luck to see that all the accesses are sequential. This means list zippers again. The one thing we don't want to do with this one is spend all our time cycling values. For example, if I had the memory [0,1000], it would be quite annoying to go over the list 500 times just to split the value.
Every time we cycle over a structure like that, it helps to think of modulos (remainders). Using them, what we can do is figure out how many times the full list needs to be seen, and then the number of entries left over:
[500, 0]     : 500 div 2 = 250, 500 rem 2 = 0 <== give each entry 250
[0, 2, 7, 0] : 7 div 4 = 1 (add 1 everywhere), 7 rem 4 == 3 (add 1 to 1,2,3)
If we are clever enough and scan the list by giving the Div value to each entry, plus one of the Rem values, we can do each redistribution in a single pass. That's more efficient.
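The div/rem redistribution trick can be illustrated with a single-pass Python sketch (my own names, operating on a plain list rather than a zipper):

```python
def redistribute(banks):
    """One redistribution step: zero out the max, then hand out div to every
    bank and one extra unit to the first rem banks after the max."""
    n = len(banks)
    hi = max(banks)
    i = banks.index(hi)
    banks = banks[:]
    banks[i] = 0
    div, rem = divmod(hi, n)
    for k in range(1, n + 1):  # a single pass over the banks
        banks[(i + k) % n] += div + (1 if k <= rem else 0)
    return banks
```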
The problem asks us to work with 16 memory slots instead of the 4 from the example. Let's start by formatting the input:
day6_p1(String) ->
    Vals = [list_to_integer(S) || S <- string:lexemes(String, "\t ")],
    Init = {[], Vals},
    run(Init, 0, 0, 0, {#{}, 0, length(Vals)}).
So we break up the values, initialize a zipper, and then call run(Memory, Counter, Div, Rem, {Seen, CycleCounter, MemoryWidth}). Here is what the values stand for:
 Memory: the current state of the memory banks
 Counter: whenever distributing entries, we'll need to make sure we only go through each entry once. On a run, the counter will initially be the same as MemoryWidth and decrement until 0, where we'll know we're done with the redistribution.
 Div: holds the value of the integer division telling how to increment each number in the sequence
 Rem: holds the value of the remainder of the division, telling how to increment some numbers in the sequence by 1. This is a counter going down to 0 so that only some entries get incremented.
 Seen: holds a map of the entries seen so far. I'll be using the map as a set here and won't care what it contains.
 CycleCounter: counts how many redistributions we've done. Once we see the same entry a second time, this is the number we'll return.
 MemoryWidth: width of the memory banks. The problem specifies a hard 16, but I wanted to test my code over the example case, with a width of 4.
I've put the last 3 values in a tuple, so that all state accounting is grouped under a single argument.
So let's start with the initial case, when we are done with a cycle:
run(Mem, 0, _, 0, {Seen, Cycle, N}) ->
    RMem = zipper_rewind(Mem),
    case Seen of
        #{RMem := _} ->
            Cycle;
        _ ->
            {Prev, [Max|Next]} = zipper_max(RMem),
            Div = Max div N,
            Rem = Max rem N,
            run({[0|Prev], Next}, N, Div, Rem, {Seen#{RMem => true}, Cycle+1, N})
    end;
A cycle is done when the Counter value hits 0 (the Rem value should also be at 0 by then). When that happens, we rewind the zipper to its initial position, which will make it easier to compare to the previously seen entries. We do that by checking its presence in the map. If it's there, we're done and just need to return the cycle count.
If it's not there, we prepare the redistribution: find the max value (zipper_max/1 returns us in the right zipper position), set up the Div and Rem values, and start the run again having reset the Max value to 0, with the counter set to N (the cached width of memory). We're good to get distributing.
run({Prev, [H|Next]}, Ct, Div, 0, State) ->
    run({[H+Div|Prev], Next}, Ct-1, Div, 0, State);
run({Prev, [H|Next]}, Ct, Div, Rem, State) ->
    run({[H+Div+1|Prev], Next}, Ct-1, Div, Rem-1, State);
run({_, []}=Mem, Ct, Div, Rem, State) ->
    run(zipper_rewind(Mem), Ct, Div, Rem, State).
Those are the 3 clauses. The first one handles the case where there are no remainders. The only thing it has to do is add Div to every entry in the list until Ct hits 0, so we do that to one item and then call ourselves recursively. The second clause handles the case where Rem is not empty: it adds 1 to each entry on top of the Div value, and decrements the counter before going further.
The last clause is used when we reach the end of the zipper. To simulate a circular list, we rewind the zipper and start from the first element again.
That's it, this finds the good result. All we need to do is add the zipper helper functions:
zipper_rewind({Prev, Next}) ->
    {[], lists:reverse(Prev, Next)}.

zipper_max({[], Next}) ->
    Max = lists:max(Next),
    zipper_max({[], Next}, Max).

zipper_max({_, [Max|_]}=Zip, Max) -> Zip;
zipper_max({Prev, [H|Next]}, Max) -> zipper_max({[H|Prev], Next}, Max).
We're now good for part 2.
Part 2
This time around, Part 2 is not too painful. All they want to know is how many cycles went by between the first time we've seen the repeated value and the final cycle. Fortunately for us, we've got all the blocks in place. We just need to change how run stores data. Instead of putting true as a value for each entry, we store the cycle at which it was detected:
run(Mem, 0, _, 0, {Seen, Cycle, N}) ->
    RMem = zipper_rewind(Mem),
    case Seen of
        #{RMem := PrevCycle} ->
            {PrevCycle, Cycle};
        _ ->
            {Prev, [Max|Next]} = zipper_max(RMem),
            Div = Max div N,
            Rem = Max rem N,
            run({[0|Prev], Next}, N, Div, Rem, {Seen#{RMem => Cycle}, Cycle+1, N})
    end;
...
This value is added in the last clause, and extracted in the first one (as PrevCycle). We return both values in a tuple. All we need to do is rework the calling functions:
day6_p1(String) ->
    Vals = [list_to_integer(S) || S <- string:lexemes(String, "\t ")],
    Init = {[], Vals},
    {_, Cycle} = run(Init, 0, 0, 0, {#{}, 0, length(Vals)}),
    Cycle.

day6_p2(String) ->
    Vals = [list_to_integer(S) || S <- string:lexemes(String, "\t ")],
    Init = {[], Vals},
    {Prev, Current} = run(Init, 0, 0, 0, {#{}, 0, length(Vals)}),
    Current - Prev.
And the second problem is solved.
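For reference, both answers fall out of one loop-detection sketch in Python (an illustrative version, not a translation of the zipper code):

```python
def detect_loop(banks):
    """Return (cycles until a repeated state, length of the loop)."""
    seen = {}
    cycle = 0
    state = tuple(banks)
    while state not in seen:
        seen[state] = cycle  # remember the cycle at which each state appeared
        lst = list(state)
        hi = max(lst)
        i = lst.index(hi)
        lst[i] = 0
        div, rem = divmod(hi, len(lst))
        for k in range(1, len(lst) + 1):
            lst[(i + k) % len(lst)] += div + (1 if k <= rem else 0)
        state = tuple(lst)
        cycle += 1
    return cycle, cycle - seen[state]
```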
Day 7
Part 1
For this one we get entries of the format
pbga (66)
xhth (57)
ebii (61)
havc (66)
ktlj (57)
fwft (72) -> ktlj, cntj, xhth
qoyq (66)
padx (45) -> pbga, havc, qoyq
tknk (41) -> ugml, padx, fwft
jptl (61)
ugml (68) -> gyxo, ebii, jptl
gyxo (61)
cntj (57)
Which represents the nodes of a tree with their children. We have to find the root of the tree. This one is a graph problem easily solved by using the digraph and digraph_utils modules: each node is a vertex (with a label equivalent to the weight in parentheses, because I'm sure part 2 will ask for that), and each child is added as an edge between (Parent, Child). The digraph_utils:arborescence_root(G) function can find whether there's a tree in the graph and tell us its root:
day7_p1(String) ->
    Entries = [parse7(Line) || Line <- string:lexemes(String, "\n")],
    {_, {yes, Root}} = build7(Entries),
    Root.

parse7(Line) ->
    [Name, Weight | Held] = string:lexemes(Line, "()->, "),
    {Name, list_to_integer(Weight), Held}.

build7(Entries) ->
    G = digraph:new([acyclic]),
    [digraph:add_vertex(G, Name, Weight) || {Name, Weight, _} <- Entries],
    [digraph:add_edge(G, Name, Child) || {Name, _, Held} <- Entries,
                                         Child <- Held],
    {G, digraph_utils:arborescence_root(G)}.
It feels a bit too easy, but this works.
Part 2
As suspected, the weights were mandated, because part 2 asks us to find balance:
for ugml's disc to be balanced, gyxo, ebii, and jptl must all have the same weight, and they do: 61.
However, for tknk to be balanced, each of the programs standing on its disc and all programs above it must each match. This means that the following sums must all be the same:
ugml + (gyxo + ebii + jptl) = 68 + (61 + 61 + 61) = 251
padx + (pbga + havc + qoyq) = 45 + (66 + 66 + 66) = 243
fwft + (ktlj + cntj + xhth) = 72 + (57 + 57 + 57) = 243

ugml itself is too heavy: it needs to be 8 units lighter for its stack to weigh 243 and keep the towers balanced.
Given that exactly one program is the wrong weight, what would its weight need to be to balance the entire tower?
So in short, we have to find the weight of all the children of a given node, sum the weight of their own children, and find if any one is off. If one of them is off, find what the weight should be to make it the same as the rest. It's not too hard, but it's tricky to understand what exactly they are looking for.
Here's my solution, we'll go through it:
day7_p2(String) ->
    Entries = [parse7(Line) || Line <- string:lexemes(String, "\n")],
    {G, {yes, Root}} = build7(Entries),
    catch diff7(G, Root).

diff7(G, Node) ->
    Children = digraph:out_neighbours(G, Node),
    DChildren = [diff7(G, Child) || Child <- Children],
    Sums = lists:sort([{W+C, W} || {W,C} <- DChildren]),
    case {Sums, lists:reverse(Sums)} of
        {[], []} ->
            {_, Weight} = digraph:vertex(G, Node),
            {Weight, 0};
        {[{X,_}|_], [{X,_}|_]} ->
            {_, Weight} = digraph:vertex(G, Node),
            {Weight, X*length(Sums)};
        {[{X,_},{X,_}|_], [{Y,W}|_]} ->
            Diff = X - Y,
            throw({should_be, W+Diff});
        {[{Y,W}|_], [{X,_},{X,_}|_]} ->
            Diff = X - Y,
            throw({should_be, W+Diff})
    end.
First, we build the same graph as earlier, and then set up a catch around the diff function. I'm going to throw() the right result straight up there to avoid having to carry results around the whole tree depth, which would make the code less clear in my opinion. We start the diff at the root of the tree.
For the recursive bit itself, for each node we have, we find all of its children (digraph:out_neighbours/2). For each of these children, we get their own weight along with their own children's weights. This is going depth-first. If any of the children's children is unbalanced, a throw means we don't even have to carry the results here, so we can assume that we only have to care about the direct descendants.
To find if all values are the same, I'm sorting the list both forwards and backwards. So if the children weights are [1,1,1,3,1,1], I'll get the lists [1,1,...,3] and [3,1,...,1] (or the opposite, if the one standing out is the smallest child). This is not the most efficient way to do it, but it's obviously correct. The sorting is done by adding the weight of each child to the weight of their own children (W+C, the cumulative weight), since that's what the problem asks for. I'm also keeping the weight of the child node itself (W) in the list, because that's the one we must correct.
So we get to the big case expression:
 If the node is child-free, return its weight, with a child sum of 0
 If the node has all identical children (the head and tail sums are the same), then return the node's weight plus the sum of all its children
 If the node is unbalanced, subtract the cumulative weight of the node that is off from the cumulative weight of a correct node. This will give the difference in weight that must be applied to the child that has the wrong weight (W in the code). Apply that difference and bail out.
And that's how that one is solved.
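The whole balancing logic can be cross-checked with a Python sketch of the same throw-on-imbalance idea (names and parsing shortcuts here are mine):

```python
def parse_tower(lines):
    """Build {name: (weight, [children])} from 'name (w) -> a, b' lines."""
    tree = {}
    for line in lines:
        parts = line.replace(',', '').replace('(', '').replace(')', '').split()
        tree[parts[0]] = (int(parts[1]), parts[3:])  # parts[2] is '->' if present
    return tree

def find_root(tree):
    """The root is the only node that is nobody's child."""
    children = {c for _, kids in tree.values() for c in kids}
    return next(name for name in tree if name not in children)

class Unbalanced(Exception):
    """Raised with the corrected weight, like the throw({should_be, ...})."""

def check(tree, node):
    """Return the total weight of a subtree, or raise with the corrected weight."""
    weight, children = tree[node]
    totals = [check(tree, c) for c in children]
    if totals and len(set(totals)) > 1:
        odd = next(t for t in totals if totals.count(t) == 1)
        good = next(t for t in totals if totals.count(t) > 1)
        bad = children[totals.index(odd)]
        raise Unbalanced(tree[bad][0] + good - odd)
    return weight + sum(totals)
```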
Without Digraphs
Let's see how we'd build the same without a directed graph. The first problem is easy, possibly even more than the earlier one: make a list of all nodes, and subtract a list of all the nodes that are children of another one from it. You're left with a single entry that is the root of the tree:
day7_p1_2(String) ->
    Entries = [parse7(Line) || Line <- string:lexemes(String, "\n")],
    Nodes = [E || {E,_,_} <- Entries],
    HaveParent = lists:append([Children || {_,_,Children} <- Entries]),
    hd(Nodes -- HaveParent).
Part 2 can essentially be solved the same way as the previous one; we just need to replace the graph with a map that contains every tree node, keeping state of its weight and children. Here's how the map can be built:
day7_p2_2(String) ->
    Entries = [parse7(Line) || Line <- string:lexemes(String, "\n")],
    Map = maps:from_list([{E, {W,C}} || {E,W,C} <- Entries]),
    Nodes = [E || {E,_,_} <- Entries],
    HaveParent = lists:append([Children || {_,_,Children} <- Entries]),
    [Root] = Nodes -- HaveParent,
    catch diff7_2(Map, Root).
It's a straight up conversion. Since we know from the problem definition we have a tree, there's no need to make any fancy graph. On the other hand, we still need to know the root node to know where to start from.
The diff function is a direct translation, with lookups from the graph replaced with lookups from the map:
diff7_2(Map, Node) ->
    #{Node := {Weight, Children}} = Map,
    DChildren = [diff7_2(Map, Child) || Child <- Children],
    Sums = lists:sort([{W+C, W} || {W,C} <- DChildren]),
    case {Sums, lists:reverse(Sums)} of
        {[], []} ->
            {Weight, 0};
        {[{X,_}|_], [{X,_}|_]} ->
            {Weight, X*length(Sums)};
        {[{X,_},{X,_}|_], [{Y,W}|_]} ->
            Diff = X - Y,
            throw({should_be, W+Diff});
        {[{Y,W}|_], [{X,_},{X,_}|_]} ->
            Diff = X - Y,
            throw({should_be, W+Diff})
    end.
That one may actually be simpler. And it's apparently a tiny bit faster, as well.
Day 8
Part 1
I enjoyed Day 8. What we have to do is write a tiny interpreter for input like this:
b inc 5 if a > 1
a inc 1 if b < 5
c dec -10 if a >= 1
c inc -20 if c == 10
We're told that all variables/registers start at 0; we don't know ahead of time how many of them there will be, and the following operators are supported: >, <, >=, <=, ==, and !=. We have to evaluate the whole thing. At the end of the run, they want to know the highest value held in any register.
First of all, looking at the input, there's a very strict structure here: <register> <inc|dec> ±<int> if <register> <cmp> ±<int>. We can just extract what we want. I'll use a map to carry the state, since the register names can be anything:
day8_p1(String) ->
    Instructions = [parse8(Line) || Line <- string:lexemes(String, "\n")],
    State = lists:foldl(fun run8_p1/2, #{}, Instructions),
    lists:max(maps:values(State)).

parse8(Line) ->
    [VarA, Sign, Num1, "if", VarB, Cmp, Num2] = string:lexemes(Line, " "),
    { {op(Cmp), VarB, list_to_integer(Num2)},
      {VarA, num(Sign, list_to_integer(Num1))}}.

op(">") -> fun erlang:'>'/2;
op("<") -> fun erlang:'<'/2;
op(">=") -> fun erlang:'>='/2;
op("<=") -> fun erlang:'=<'/2;
op("==") -> fun erlang:'=='/2;
op("!=") -> fun erlang:'/='/2.

num("inc", N) -> N;
num("dec", N) -> -N.
The code parses every line into a tuple of the form {{CmpFunction, A, B}, {Var, N}}, where the first half is the conditional comparison, and the second half is the increment to give to the variable. You'll note that I just convert strings directly to Erlang functions for each operator, and that I get rid of the inc or dec operations by applying their sign directly to the integers I'm handling.
The instructions are then passed to a call to lists:foldl/3, which will run over the entire set and return its state, out of which we grab the max value. The run8_p1/2 function is defined as follows:
run8_p1({{Cmp, A, B}, {C, N}}, State) ->
    case Cmp(maps:get(A, State, 0), B) of
        false -> State;
        true -> State#{C => maps:get(C, State, 0) + N}
    end.
Basically, for every comparison, we get the value of the left-hand-side operand out of the map (with a default of 0), and if the comparison holds, we increment the value of the variable C accordingly.
This solves the problem.
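As a quick sanity check, the fold logic can be exercised on the sample program from the problem statement. This is a self-contained shell sketch: the instructions are hand-parsed into the same shape parse8/1 produces (with the inc/dec sign already folded in), and Run mirrors run8_p1/2:

```erlang
%% Hand-parsed sample program; "dec -10" becomes +10, "inc -20" stays -20.
Instructions = [
    {{fun erlang:'>'/2,  "a",  1}, {"b",   5}},
    {{fun erlang:'<'/2,  "b",  5}, {"a",   1}},
    {{fun erlang:'>='/2, "a",  1}, {"c",  10}},
    {{fun erlang:'=='/2, "c", 10}, {"c", -20}}
].
%% Same shape as run8_p1/2: apply the increment only when the comparison holds.
Run = fun({{Cmp, A, B}, {C, N}}, State) ->
    case Cmp(maps:get(A, State, 0), B) of
        false -> State;
        true  -> State#{C => maps:get(C, State, 0) + N}
    end
end.
State = lists:foldl(Run, #{}, Instructions).
%% a ends at 1 and c at -10 (b is never written), so the highest value is 1.
1 = lists:max(maps:values(State)).
```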
Part 2
Part 2 asks us to find what the highest value was at any given point in time in any of the registers. That's a simple enough problem to solve: we just need a new running function that tracks the highest value it has seen for any C variable; since that's where we insert any value, it's the only hook point we need. We also start the max at 0, since that's the default value of all registers:
day8_p2(String) ->
    Instructions = [parse8(Line) || Line <- string:lexemes(String, "\n")],
    {_, Max} = lists:foldl(fun run8_p2/2, {#{}, 0}, Instructions),
    Max.

run8_p2({{Cmp, A, B}, {C, N}}, {State, Max}) ->
    case Cmp(maps:get(A, State, 0), B) of
        false -> {State, Max};
        true ->
            ValC = maps:get(C, State, 0) + N,
            {State#{C => ValC}, max(Max, ValC)}
    end.
That was simple enough!
Day 9
Part 1
This day's challenge consists of parsing some regular text input. There are two categories of text defined: groups and garbage.
- Groups are any text between { and }. They can be nested, and input within a group must be validated.
- Garbage is any content between < and >. It does not nest, but the ! character can be used as an escape that cancels whatever character follows it.
The first challenge gives us text like the following:
{}, score of 1.
{{{}}}, score of 1 + 2 + 3 = 6.
{{},{}}, score of 1 + 2 + 2 = 5.
{{{},{},{{}}}}, score of 1 + 2 + 3 + 3 + 3 + 4 = 16.
{<a>,<a>,<a>,<a>}, score of 1.
{{<ab>},{<ab>},{<ab>},{<ab>}}, score of 1 + 2 + 2 + 2 + 2 = 9.
{{<!!>},{<!!>},{<!!>},{<!!>}}, score of 1 + 2 + 2 + 2 + 2 = 9.
{{<a!>},{<a!>},{<a!>},{<ab>}}, score of 1 + 2 = 3.
Basically, each group is worth as many points as the nesting level at which it is.
The easiest way to do this is just to write a simple parser:
day9_p1(String) ->
    group1(String, 0, 0).

%% group: { ... }
%% garbage: < ... > (non-nestable)
%% ! = escape within garbage
%% group score is +L per group, where L is the nesting level
group1([], _, Acc) -> Acc;
group1([${|Rest], Lvl, Acc) -> group1(Rest, Lvl+1, Acc);
group1([$}|Rest], Lvl, Acc) -> group1(Rest, Lvl-1, Acc+Lvl);
group1([$<|Rest], Lvl, Acc) -> group1(garbage1(Rest), Lvl, Acc);
group1([_|Rest], Lvl, Acc) -> group1(Rest, Lvl, Acc).

garbage1([$!,_|Rest]) -> garbage1(Rest);
garbage1([$>|Rest]) -> Rest;
garbage1([_|Rest]) -> garbage1(Rest).
Quite simply, the groups track their own level (starting at 0). Every time a bracket is opened ({), the level is incremented by 1, and every time a bracket closes (}), we add that level to the accumulator Acc and then decrement the level for the next round. If garbage is encountered, we eliminate it before returning the leftover text.
I've also decided to put a catch-all clause as the last group1/3 clause, so that it will catch commas and possibly any other irregularity.
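To double-check the scoring rules, here's the same walk re-stated as shell-style named funs (same clauses as group1/3 and garbage1/1 above), run against a few of the listed examples:

```erlang
%% Skip garbage: ! cancels the next char, > closes, anything else is ignored.
Garbage = fun Skip([$!, _ | Rest]) -> Skip(Rest);
              Skip([$> | Rest])    -> Rest;
              Skip([_ | Rest])     -> Skip(Rest)
          end.
%% Score groups: +Lvl per closing brace, garbage stripped on the way.
Score = fun Group([], _Lvl, Acc)         -> Acc;
            Group([${ | Rest], Lvl, Acc) -> Group(Rest, Lvl+1, Acc);
            Group([$} | Rest], Lvl, Acc) -> Group(Rest, Lvl-1, Acc+Lvl);
            Group([$< | Rest], Lvl, Acc) -> Group(Garbage(Rest), Lvl, Acc);
            Group([_ | Rest], Lvl, Acc)  -> Group(Rest, Lvl, Acc)
        end.
1 = Score("{}", 0, 0).
6 = Score("{{{}}}", 0, 0).
5 = Score("{{},{}}", 0, 0).
3 = Score("{{<a!>},{<a!>},{<a!>},{<ab>}}", 0, 0).
```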
Part 2
Same rules, except now what is asked of us is to track the count of how many non-escaped garbage characters were encountered:
<>, 0 characters.
<random characters>, 17 characters.
<<<<>, 3 characters.
<{!>}>, 2 characters.
<!!>, 0 characters.
<!!!>>, 0 characters.
<{o"i!a,<{i<a>, 10 characters.
Roughly the same code can be used, except we'll use different accounting rules:
day9_p2(String) ->
    group2(String, 0).

group2([], Acc) -> Acc;
group2([${|Rest], Acc) -> group2(Rest, Acc);
group2([$}|Rest], Acc) -> group2(Rest, Acc);
group2([$<|Rest], Acc) ->
    {N, Next} = garbage2(Rest, 0),
    group2(Next, N+Acc);
group2([_|Rest], Acc) -> group2(Rest, Acc).

garbage2([$!,_|Rest], Acc) -> garbage2(Rest, Acc);
garbage2([$>|Rest], Acc) -> {Acc, Rest};
garbage2([_|Rest], Acc) -> garbage2(Rest, Acc+1).
As you can see, the parsing logic is much the same, but now garbage2/2 does its own accounting, and group2/2's accumulator only carries the count from garbage call to garbage call.
Day 10
Part 1
I hated that one. What a bad problem description. Just take a look: http://adventofcode.com/2017/day/10
All that text basically says that the code must do the following:
- you're going to work on a sequence of 256 bytes, with values 0..255 in that order
- you start at index 0 of that sequence
- you're given a list of integers to work with
- for each integer N, make a selection of that many bytes in the sequence from your current position, and wrap around if you reach the end of the sequence
- reverse the selected sequence and reinsert it in place in the original sequence
- shift your position forward by the value N plus the number of rounds seen so far
So if you start with a sequence of bytes 0,1,2,3,4 and the input [3,4,1,5], you get:
0,1,2,3,4  ->  2,1,0,3,4    select 3 at pos 0, reverse; move +3 +0 to pos 3
2,1,0,3,4  ->  4,3,0,1,2    select 4 at pos 3 (wrapping), reverse; move +4 +1 to pos 3
4,3,0,1,2  ->  4,3,0,1,2    select 1 at pos 3, reverse; move +1 +2 to pos 1
4,3,0,1,2  ->  3,4,2,1,0    select 5 at pos 1 (wrapping), reverse; move +5 +3 to pos 4
With that calculation done, grab the first two numbers of the sequence, and multiply them.
So help me god, we had to implement that thing. It's not that hard, it's just convoluted. Using zippers is a bit annoying because we need to maintain the initial start point, and doing so while keeping the wrapping + reversing sequence intact is kind of a pain. Since we're working in bytes, a binary is well-suited.
Let's start by generating our input as a binary, and converting the strings:
day10_p1(String, Len) when Len > 0, Len =< 256 ->
    Inputs = [list_to_integer(S) || S <- string:lexemes(String, ",")],
    Base = << <<X>> || X <- lists:seq(0, Len-1) >>,
    <<X, Y, _/binary>> = day10_run(Base, 0, Inputs, 0),
    X*Y.
The hash function is day10_run(Bin, Position, Inputs, Step), where Position is the current location of the cursor, and Step is the ever-incrementing counter for cursor shifts. That function returns a binary (the hash), from which I extract the two numbers that get multiplied. The function will iterate over the sequence as follows:
day10_run(Bin, _, [], _) -> Bin;
day10_run(Bin, Pos, [H|T], Step) -> ???
The tricky aspect of the function here is figuring out all the right lengths of slices we have to cut in our binary. A naive approach may look like this:
0                        Pos                Pos+H                  Size
|-- unchanged start seq --|-- selected seq --|-- unchanged end seq --|
With the start sequence (0..Pos) and the end sequence ((Pos+H)..Size) possibly having a length of 0. For example, if the first iteration has a starting position at 0 and H is as large as the binary, then the selected sequence is the full thing.
This approach fails whenever we get a selected value that wraps around the binary. Instead we have to split it in 4 sequences:
0        Lead                     Pos                Pos+H                  Size
|- wrapping lead -|- unchanged start seq -|-- selected seq --|- unchanged end seq -|
So the Lead may have a length of 0 or more, but if it has a length greater than 0, then the unchanged end sequence is forced to have a length of 0 since it means the selected sequence fills the whole trailing part of the binary.
day10_run(Bin, _, [], _) -> Bin;
day10_run(Bin, Pos, [H|T], Step) ->
    %% Calculate ranges
    Size = byte_size(Bin),
    LeadLen = max(0, (Pos + H) - Size),
    StartLen = Pos - LeadLen,
    SelectLen = H - LeadLen,
    EndLen = Size - (LeadLen + StartLen + SelectLen),
    %% Match sequences to modify
    <<Lead:LeadLen/binary, Start:StartLen/binary,
      Select:SelectLen/binary, End:EndLen/binary>> = Bin,
    %% Reverse sequence
    <<NewSelect:SelectLen/binary, NewLead:LeadLen/binary>> =
        list_to_binary(lists:reverse(binary_to_list(<<Select/binary, Lead/binary>>))),
    %% Reinsert sequence
    NewBin = <<NewLead/binary, Start/binary, NewSelect/binary, End/binary>>,
    %% Run again
    day10_run(NewBin, (Pos+H+Step) rem Size, T, Step+1).
With the lengths defined, cutting the binary up is just a matter of selecting each segment. To reverse the values I had to convert to a list first (there's a messy trick that reverses an arbitrary byte sequence by flipping the endianness of a subsequence, but let's ignore that). The list is converted back into a binary, from which the new lead and selected sequences are matched out before being reinjected into a new, updated binary.
Then the cursor is moved forward and off we go!
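The round logic can be validated against the worked example above. Here's a list-based sketch of the same rounds (simpler than the binary version): it keeps the cursor at the head by rotating the list, which sidesteps the four-way split, and un-rotates at the end using the accumulated offset. Rotate and Round are names of my own choosing:

```erlang
%% Rotate a list left by N positions (N may exceed the length).
Rotate = fun(N, L) -> {H, T} = lists:split(N rem length(L), L), T ++ H end.
%% One knot-hash pass: reverse the first H elements (cursor is at the head),
%% rotate so the new cursor is at the head, and track the total offset.
Round = fun Run(L, [], Offset, _Step) ->
                Rotate(length(L) - (Offset rem length(L)), L);
            Run(L, [H|T], Offset, Step) ->
                {Sel, Rest} = lists:split(H, L),
                Next = Rotate((H + Step) rem length(L), lists:reverse(Sel) ++ Rest),
                Run(Next, T, Offset + H + Step, Step + 1)
        end.
%% The sample: lengths [3,4,1,5] over 0..4 must end as 3,4,2,1,0 (3*4 = 12).
[3,4,2,1,0] = Round([0,1,2,3,4], [3,4,1,5], 0, 0).
```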
Part 2
Part 2 is a god damn pain in the ass is what it is. Not content with that convoluted mechanism, you get another page of explanations. The hash remains the same, but now what you do is:
- Use the raw byte values of the input string instead of an integer conversion
- Add the sequence of integers [17, 31, 73, 47, 23] at the end of the input
- Run the hash 64 times in a row (without resetting the step counter nor shifting the cursor back to 0)
- Take slices of 16 bytes of the final binary and xor each of the bytes of each slice together to compact the whole thing
- Convert to a lowercase hexadecimal string
Put like that it's kind of straightforward, but it's still kind of painful, as I'm doing this one exercise on the morning after our office Christmas party, on a bus. So here goes:
day10_p2(RawInputs) ->
    Inputs = lists:append(lists:duplicate(64, RawInputs ++ [17,31,73,47,23])),
    Base = << <<X>> || X <- lists:seq(0, 255) >>,
    SparseHash = day10_run(Base, 0, Inputs, 0),
    DenseHash = << <<(xor10(Bin))>> || <<Bin:16/binary>> <= SparseHash >>,
    string:lowercase(to_hex(DenseHash)).
The Erlang string format is real friendly to the first step, since I just use lists of integers. Rather than modifying the tricky hash function to carry its values 64 times, I decided to pay the memory cost of duplicating the input sequence 64 times and injecting that in there. Screw this, let's reuse the part 1's code.
Then, I use a binary comprehension to grab binaries 16 bytes large, passed to a xor function specifically designed for this day 10 challenge:
xor10(<<X:8, Rest/binary>>) -> xor10(Rest, X).

xor10(<<>>, N) -> N;
xor10(<<X:8, Rest/binary>>, N) -> xor10(Rest, X bxor N).
Do note that I had to use bxor here, since xor is an operator dedicated to boolean values rather than integers.
Hexadecimal conversion just goes the lazy way:
to_hex(Bin) ->
    iolist_to_binary([io_lib:format("~2.16.0B", [X]) || <<X:8>> <= Bin]).
It's not particularly efficient, but at that point I didn't feel like implementing my own padding logic anymore. It would have been something like:
case integer_to_list(N, 16) of
    [X] -> [$0, X];
    S -> S
end
applied to every element. But ugh. Day done.
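For reference, the ~2.16.0B control sequence means "base 16, field width 2, padded with 0", so every byte comes out as exactly two hex characters (ToHex here is just a fun-wrapped copy of to_hex/1):

```erlang
%% Same logic as to_hex/1: two zero-padded uppercase hex digits per byte.
ToHex = fun(Bin) ->
    iolist_to_binary([io_lib:format("~2.16.0B", [X]) || <<X:8>> <= Bin])
end.
<<"FF01A0">> = ToHex(<<255, 1, 160>>).
<<"ff01a0">> = string:lowercase(ToHex(<<255, 1, 160>>)).
```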
Day 11
Part 1
Oh, a hex grid. I know nothing about them, really. I tried a few naive approaches and quickly found that any system based on the intuitive (x,y) two-dimensional coordinate system would lead me nowhere fast. I searched for hex grid coordinate systems online, and eventually found this page: https://www.redblobgames.com/grids/hexagons/
The page is super helpful and contains visual maps of various hex grids with coordinate systems for each of them, with the theory about how they map to a 3-dimensional cube. Cool stuff. I haven't read the full thing yet (I will), but found that the layout given by the exercise:
[Hex-grid diagram: a flat-topped hexagon A in the center, ringed by tiles B through I, with the cube axes labelled: +x east, -x west, +y north-west, -y south-east, -z north-east, +z south-west.]
This is named the odd-q layout, with flat-topped hexagons. The grid here allows free movement to the north and south, but not east and west. I've labelled the axes on it, plus the direction of each axis in parentheses. Movement along any of the x, y, or z axes is neutral, but must change the others accordingly. In short:
- a movement to the north is (x, y+1, z-1)
- a movement to the north-east is (x+1, y, z-1)
- a movement to the south-east is (x+1, y-1, z)
- a movement to the south is (x, y-1, z+1)
- a movement to the south-west is (x-1, y, z+1)
- a movement to the north-west is (x-1, y+1, z)
Just looking at a map with labelled coordinates (go see the website), it soon becomes fairly apparent that the number of steps needed to reach any tile is the maximal absolute value of any single axis coordinate. So if you pick the tile at coordinates {3,-2,-1}, it will take only 3 steps to get there.
This gives us the solution instantly: just follow the movements and find the maximal absolute value of the resulting coordinate:
day11_p1(String) ->
    Tiles = string:lexemes(String, ","),
    count_steps(hex_follow(Tiles, {0,0,0})).

hex_follow([], Coord) -> Coord;
hex_follow(["n"|S], {X,Y,Z}) -> hex_follow(S, {X,Y+1,Z-1});
hex_follow(["s"|S], {X,Y,Z}) -> hex_follow(S, {X,Y-1,Z+1});
hex_follow(["ne"|S], {X,Y,Z}) -> hex_follow(S, {X+1,Y,Z-1});
hex_follow(["sw"|S], {X,Y,Z}) -> hex_follow(S, {X-1,Y,Z+1});
hex_follow(["nw"|S], {X,Y,Z}) -> hex_follow(S, {X-1,Y+1,Z});
hex_follow(["se"|S], {X,Y,Z}) -> hex_follow(S, {X+1,Y-1,Z}).

count_steps({X,Y,Z}) ->
    lists:max([abs(X), abs(Y), abs(Z)]).
That works.
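The max-of-absolute-values claim checks out against the examples in the problem statement (ne,ne,ne ends 3 steps away; ne,ne,sw,sw is back at the origin; se,sw,se,sw,sw ends 3 steps away). A fun-based re-statement of the same movement table:

```erlang
%% Cube-coordinate deltas, same as hex_follow/2.
Move = fun("n",  {X,Y,Z}) -> {X,   Y+1, Z-1};
          ("s",  {X,Y,Z}) -> {X,   Y-1, Z+1};
          ("ne", {X,Y,Z}) -> {X+1, Y,   Z-1};
          ("sw", {X,Y,Z}) -> {X-1, Y,   Z+1};
          ("nw", {X,Y,Z}) -> {X-1, Y+1, Z};
          ("se", {X,Y,Z}) -> {X+1, Y-1, Z}
       end.
Steps = fun(Dirs) ->
    {X, Y, Z} = lists:foldl(Move, {0,0,0}, Dirs),
    lists:max([abs(X), abs(Y), abs(Z)])
end.
3 = Steps(["ne","ne","ne"]).
0 = Steps(["ne","ne","sw","sw"]).
3 = Steps(["se","sw","se","sw","sw"]).
```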
Part 2
Fortunately, part 2 just asks us to find the maximal distance at any point in time. No need for a fancier solution; all we need to do is carry around a max value computed at each position seen so far. Here it is:
day11_p2(String) ->
    Tiles = string:lexemes(String, ","),
    hex_max(Tiles, {0,0,0}, 0).

hex_max([], Coord, Max) -> max_steps(Max, Coord);
hex_max(["n"|S], C={X,Y,Z}, Max) -> hex_max(S, {X,Y+1,Z-1}, max_steps(Max,C));
hex_max(["s"|S], C={X,Y,Z}, Max) -> hex_max(S, {X,Y-1,Z+1}, max_steps(Max,C));
hex_max(["ne"|S], C={X,Y,Z}, Max) -> hex_max(S, {X+1,Y,Z-1}, max_steps(Max,C));
hex_max(["sw"|S], C={X,Y,Z}, Max) -> hex_max(S, {X-1,Y,Z+1}, max_steps(Max,C));
hex_max(["nw"|S], C={X,Y,Z}, Max) -> hex_max(S, {X-1,Y+1,Z}, max_steps(Max,C));
hex_max(["se"|S], C={X,Y,Z}, Max) -> hex_max(S, {X+1,Y-1,Z}, max_steps(Max,C)).

max_steps(Max, Coords) ->
    max(count_steps(Coords), Max).
You can see that I reuse the count_steps/1
function from part 1, and the rest is pretty straightforward if you understood the first part.
Day 12
Part 1
A graph algorithm! This means we'll cheat with digraph and laugh all the way to the bank.
We're given input like this:
0 <-> 2
1 <-> 1
2 <-> 0, 3, 4
3 <-> 2, 4
4 <-> 2, 3, 6
5 <-> 6
6 <-> 4, 5
which represents an undirected graph (vertex 2 is connected to vertices 0, 3, and 4, and the opposite is also true). The question asks how many vertices are in vertex 0's connected component. The trick here would be to do a breadth-first search from node 0, marking all the nodes we can reach from it until we can't reach any nodes we haven't seen before. The number of nodes seen (including 0) is the answer to this problem.
The digraph_utils module has a reachable/2 function that does this for us:
day12_p1(String) ->
    G = digraph:new(),
    Pairs = [parse12(Row) || Row <- string:lexemes(String, "\n")],
    graph12(G, Pairs),
    length(digraph_utils:reachable(["0"], G)).

parse12(Row) ->
    [Node | Neighbours] = string:lexemes(Row, "<-> ,"),
    {Node, Neighbours}.

graph12(G, Pairs) ->
    [digraph:add_vertex(G, V) || {V,_} <- Pairs],
    [[begin
          digraph:add_edge(G, N, V),
          digraph:add_edge(G, V, N)
      end || N <- Neighbours] || {V, Neighbours} <- Pairs],
    ok.
That's the correct answer.
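On the sample graph, vertex 0's component should contain the six vertices 0, 2, 3, 4, 5, and 6. A quick standalone check, with the sample's undirected edges added by hand in both directions (vertex 1 only connects to itself, so it stays out):

```erlang
G = digraph:new().
[digraph:add_vertex(G, V) || V <- lists:seq(0, 6)].
%% Each undirected link becomes two directed edges, as in graph12/2.
[begin digraph:add_edge(G, A, B), digraph:add_edge(G, B, A) end
 || {A, B} <- [{0,2}, {2,3}, {2,4}, {3,4}, {4,6}, {5,6}]].
6 = length(digraph_utils:reachable([0], G)).
```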
Part 2
Part 2 instead asks for the total number of connected components in the graph. This would mean going from 0, and getting the list of all the seen nodes so far as in part 1. Then once that component is fully discovered, start the same thing from a vertex that has not been visited yet; that's another component. Rinse and repeat until all is done. digraph_utils has the components/1 function for that:
day12_p2(String) ->
    G = digraph:new(),
    Pairs = [parse12(Row) || Row <- string:lexemes(String, "\n")],
    graph12(G, Pairs),
    length(digraph_utils:components(G)).
It is a bit easier that way.
Day 13
Part 1
We're given an input of:
0: 3
1: 2
4: 4
6: 4
Which is equivalent to the following diagram, at step 0:
 v
 0   1   2   3   4   5   6
[s] [s] ... ... [s] ... [s]
[ ] [ ]         [ ]     [ ]
[ ]             [ ]     [ ]
                [ ]     [ ]
At step 1, the diagram now looks like this:
     v
 0   1   2   3   4   5   6
[ ] [ ] ... ... [ ] ... [ ]
[s] [s]         [s]     [s]
[ ]             [ ]     [ ]
                [ ]     [ ]
And at step 4, it looks like this:
                 v
 0   1   2   3   4   5   6
[s] [s] ... ... [ ] ... [ ]
[ ] [ ]         [ ]     [ ]
[ ]             [s]     [s]
                [ ]     [ ]
The idea being that in each column, the depth cursor (s) moves up and down while, at each step, the overall layer cursor (v) moves one step to the right. The objective is figuring out whether v ever meets s at the top of a column. Every time they do meet, add the result of multiplying the column depth by the column's index.
At a glance, this can be solved with modulo arithmetic (again!). Since each layer index matches each step (layer 4 is hit on the 4th step), we need to figure out whether the layer index evenly divides the scanner's cycle. The trick for the column depth is that we must count the intervals, not the depth itself. For example, the column at layer 4 has a depth of 4, but will only be back at 0 after 6 steps! That's because there are 3 steps required to reach the bottom of the column, and 3 steps back up: a full cycle is twice the column depth minus one, or (Range-1)*2 steps:
day13_p1(String) ->
    Config = [{list_to_integer(Layer), list_to_integer(Range)}
              || Row <- string:lexemes(String, "\n"),
                 [Layer, Range] <- [string:lexemes(Row, ": ")]],
    lists:sum([Layer*Range || {Layer, Range} <- Config,
                              Layer rem ((Range-1)*2) == 0]).
All the important stuff is in Layer rem ((Range-1)*2) == 0, which checks that the layer index evenly divides by the number of steps in a full scanner cycle ((Range-1)*2).
Part 2
We are now asked to figure out how many steps we need to wait if we want to guarantee that the two cursors will never meet. The easy straightforward way to do it is through trial and error:
day13_p2(String) ->
    Config = [{list_to_integer(Layer), list_to_integer(Range)}
              || Row <- string:lexemes(String, "\n"),
                 [Layer, Range] <- [string:lexemes(Row, ": ")]],
    day13_attempts(0, Config).

day13_attempts(N, Config) ->
    try
        [throw(caught) || {Layer, Range} <- Config,
                          (N+Layer) rem ((Range-1)*2) == 0]
    of
        _ -> N
    catch
        caught -> day13_attempts(N+1, Config)
    end.
Basically, the day13_attempts/2 function takes an offset N, which represents the number of delayed steps. That value is used in (N+Layer) rem ((Range-1)*2) == 0 to make the same calculation as before, but with the delay included. We use non-local returns (throw(caught)) to abort as soon as we're caught, and then recurse with one more step of waiting. If a run completes without conflict, the value is returned. The calculation could be made faster by precomputing the steps for each column depth (range), but things were fast enough as they were.
I suspect there's a way to find the value without doing an exhaustive search (something about comparing cycle durations across all ranges), but I could not come up with it intuitively or in a short amount of time.
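For what it's worth, the trial-and-error search finds a delay of 10 on the sample layout, as the problem statement says it should. Here's a compact re-statement that uses lists:all/2 instead of the throw:

```erlang
Layout = [{0,3}, {1,2}, {4,4}, {6,4}].
%% A delay is safe when no layer's scanner is at the top on arrival.
Safe = fun(Delay) ->
    lists:all(fun({Layer, Range}) ->
                  (Delay + Layer) rem ((Range-1)*2) =/= 0
              end, Layout)
end.
10 = hd([N || N <- lists:seq(0, 100), Safe(N)]).
```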
Day 14
Part 1
This one asks us to reuse the hash from Day 10 part 2, and generate a grid of bits out of it. We're given an input string, and should append "-0" up to "-127" at the end of it, hash each string individually, decode them from hex into binary, and end up with a 128x128 grid of bits.
Part 1 asks how many of these bits are set to 1:
day14_p1(String) ->
    Inputs = [String ++ "-" ++ integer_to_list(N) || N <- lists:seq(0, 127)],
    KnotHashes = << <<(binary_to_integer(day10_p2(Str), 16)):128>> || Str <- Inputs >>,
    lists:sum([1 || <<1:1>> <= KnotHashes]).
Bit syntax is fairly useful here. First line generates the inputs, second line the hashes (decoded into binary), and the third line counts the bits set to 1.
Part 2
Part 2 asks us to find how many contiguous groups of 1 values are adjacent to each other (without diagonals):
In a given example, they give us the following nine regions that are visible, each marked with a distinct digit:
11.2.3..-->
.1.2.3.4
....5.6.
7.8.55.9
.88.5...
88..5..8
.8...8..
88.8.88.-->
|      |
V      V
While the region marked 8 does not appear contiguous in this small view, all of the squares marked 8 are connected when considering the whole 128x128 grid.
So running that one will require us to do a depthfirst search. Every time we find a bit set to 1, we'll enter a search where we'll look for each of its neighbours; if their value is 1, we recurse; if not, we go back up. To make things a bit more efficient, we'll use a map of each visited positive value so that we don't end up searching the same area again and again in an infinite loop.
Also, Erlang has no good grid data structure, so I'll use a single large binary with positions {Y,X} (for row and column respectively) and just multiply Y by a known width to get the right offset:
day14_p2(String) ->
    Inputs = [String ++ "-" ++ integer_to_list(N) || N <- lists:seq(0, 127)],
    Grid = << <<(binary_to_integer(day10_p2(Str), 16)):128>> || Str <- Inputs >>,
    day14_search(Grid, {0,0}, #{}, 0).

day14_search(_Grid, {128,_}, _Map, N) -> N;
day14_search(Grid, Pos, Map, N) ->
    case not maps:is_key(Pos, Map) andalso day14_at(Grid, Pos) of
        1 ->
            NewMap = day14_group(Grid, Pos, Map, N+1),
            day14_search(Grid, next14(Pos), NewMap, N+1);
        _ ->
            day14_search(Grid, next14(Pos), Map, N)
    end.
This gives the foundation for the search: iterate over each square; if it's positive, go for the group search; if not, keep iterating until the end. When done, it returns the group count N. We hardcode the termination at row index 128 (the 129th row, out of bounds). Navigation is done with these two functions:
day14_at(Grid, {Y,X}) when X >= 0, X =< 127, Y >= 0, Y =< 127 ->
    Pos = Y*128 + X,
    <<_:Pos/bits, Bit:1, _/bits>> = Grid,
    Bit;
day14_at(_, _) ->
    0.

next14({Y,127}) -> {Y+1, 0};
next14({Y,X}) -> {Y, X+1}.
The first one just uses the multiplication trick to get the position; if the value is out of bounds, we just return 0, since we only care about groups and 0s are not in them. A bit messy, but it works well. The second function manually wraps around from the last column to the next row.
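The bit-sized matching deserves a tiny illustration of its own: <<_:Pos/bits, Bit:1, _/bits>> skips Pos bits and captures a single bit, which is what lets one flat binary act as a two-dimensional bit grid. BitAt is a generalized sketch of day14_at with a configurable width:

```erlang
%% Fetch the bit at row Y, column X of a Width-bits-wide grid.
BitAt = fun(Grid, {Y, X}, Width) ->
    Pos = Y * Width + X,
    <<_:Pos/bits, Bit:1, _/bits>> = Grid,
    Bit
end.
%% An 8-wide, 2-row grid: row 0 is 10000001, row 1 is 01000000.
Grid = <<2#10000001, 2#01000000>>.
1 = BitAt(Grid, {0, 0}, 8).
1 = BitAt(Grid, {0, 7}, 8).
1 = BitAt(Grid, {1, 1}, 8).
0 = BitAt(Grid, {1, 0}, 8).
```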
Now for the recursive group search:
day14_group(Grid, Pos={Y,X}, Map, N) ->
    case not maps:is_key(Pos, Map) andalso day14_at(Grid, Pos) of
        1 ->
            lists:foldl(fun(NPos, M) -> day14_group(Grid, NPos, M, N) end,
                        Map#{Pos => N},
                        [{Y+1,X}, {Y-1,X}, {Y,X+1}, {Y,X-1}]);
        _ ->
            Map
    end.
This is simply a question of building a series of lookups to be queued up. With the map weaved in and out at every level, we ensure it eventually stops once the group is complete. The same lookup is done in the day14_search function, ensuring that we don't re-scan groups.
Day 15
Part 1
This one asks us to use two data generators, each with a distinct formula. A number is generated from each generator, and the last 16 bits of these are compared. If the last 16 bits are the same, we count the pair as matching. Each iteration of each generator uses its previous value as a seed.
The first generator takes the previous (seed) value, multiplies it by 16807, and keeps the remainder from dividing the product by 2147483647. The second generator works the same way, but uses 48271 as a multiplication factor.
We have to find how many pairs match in the first 40,000,000 ones, based on input seeds:
day15_p1(A, B) ->
    day15_p1(40000000, 0, A, B).

day15_p1(0, Acc, _, _) -> Acc;
day15_p1(N, Acc, A, B) ->
    NA = next_a1(A),
    NB = next_b1(B),
    day15_p1(N-1, Acc+judge(NA,NB), NA, NB).

next_a1(Prev) -> (Prev*16807) rem 2147483647.
next_b1(Prev) -> (Prev*48271) rem 2147483647.

judge(A, B) ->
    case <<A:16>> =:= <<B:16>> of
        true -> 1;
        false -> 0
    end.
That's rather straightforward. The interesting bit is converting the integers directly to a 16-bit-wide binary with <<N:16>>. If the number is larger, Erlang truncates it and keeps the part of the integer that fits. For example, <<16#ABCDEF:8>> will yield <<16#EF:8>>. Then we can just compare the values directly.
Part 2
Part 2 keeps the same mechanism, but reduces the count to 5,000,000 pairs and modifies the generators so they iterate until they find a number evenly divisible by 4 and 8, respectively:
day15_p2(A, B) ->
    day15_p2(5000000, 0, A, B).

day15_p2(0, Acc, _, _) -> Acc;
day15_p2(N, Acc, A, B) ->
    NA = next_a2(A),
    NB = next_b2(B),
    day15_p2(N-1, Acc+judge(NA,NB), NA, NB).

next_a2(Prev) ->
    case (Prev*16807) rem 2147483647 of
        Next when Next rem 4 =:= 0 -> Next;
        Next -> next_a2(Next)
    end.

next_b2(Prev) ->
    case (Prev*48271) rem 2147483647 of
        Next when Next rem 8 =:= 0 -> Next;
        Next -> next_b2(Next)
    end.
Recursion kind of makes it straightforward. Nothing but the generators and the first value got modified.
Day 16
Part 1
This one has us start from a sequence a..p, with a sequence of operations as input:

- sX, which means to take the X trailing letters and bring them to the front of the sequence
- xA/B, which asks to swap A and B by position
- pA/B, which asks to swap A and B by name
This is called a dance. I decided to use a binary since it's easy to slice through and all values fit within one byte:
day16_p1(String) ->
    Moves = [parse16(Move) || Move <- string:lexemes(String, ",")],
    Base = << <<($a+N)>> || N <- lists:seq(0, 15) >>,
    lists:foldl(fun dance16/2, Base, Moves).

parse16("s"++N) ->
    {s, list_to_integer(N)};
parse16("x"++Str) ->
    [SA,SB] = string:lexemes(Str, "/"),
    A = list_to_integer(SA),
    B = list_to_integer(SB),
    {x, min(A,B), max(A,B)};
parse16("p"++Str) ->
    [[A],[B]] = string:lexemes(Str, "/"), % just the char value
    {p, A, B}.
Parsing is done by just pattern matching and translating to an Erlang term. The 'dance' itself is implemented as:
dance16({s, N}, Bin) ->
    Lead = byte_size(Bin)-N,
    <<Head:Lead/binary, Tail:N/binary>> = Bin,
    <<Tail/binary, Head/binary>>;
dance16({x, A, B}, Bin) ->
    Mid = (B-A)-1,
    <<Head:A/binary, X, Center:Mid/binary, Y, Tail/binary>> = Bin,
    <<Head/binary, Y, Center/binary, X, Tail/binary>>;
dance16({p, A, B}, Bin) ->
    {PosA,_} = binary:match(Bin, <<A>>),
    {PosB,_} = binary:match(Bin, <<B>>),
    dance16({x, min(PosA,PosB), max(PosA,PosB)}, Bin).
The first clause does the slicing, the second one swaps positions by replacing the values, and the swap by name looks up the names' positions, and then hands it back off to the second clause.
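The three moves can be checked against the 5-program example from the puzzle: starting from abcde, the moves s1, x3/4, pe/b should end up as baedc. Here's the same dance16 logic repackaged as a named fun:

```erlang
Dance = fun Step({s, N}, Bin) ->
                %% Spin: move the N trailing bytes to the front.
                Lead = byte_size(Bin) - N,
                <<Head:Lead/binary, Tail:N/binary>> = Bin,
                <<Tail/binary, Head/binary>>;
            Step({x, A, B}, Bin) ->
                %% Exchange: swap the bytes at positions A < B.
                Mid = (B - A) - 1,
                <<Head:A/binary, X, Center:Mid/binary, Y, Tail/binary>> = Bin,
                <<Head/binary, Y, Center/binary, X, Tail/binary>>;
            Step({p, A, B}, Bin) ->
                %% Partner: look the names up, then defer to the exchange.
                {PosA, _} = binary:match(Bin, <<A>>),
                {PosB, _} = binary:match(Bin, <<B>>),
                Step({x, min(PosA, PosB), max(PosA, PosB)}, Bin)
        end.
Moves = [{s,1}, {x,3,4}, {p,$e,$b}].
<<"baedc">> = lists:foldl(Dance, <<"abcde">>, Moves).
```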
Part 2
Same problem, but we must reapply all the transformations a billion times. I initially tried the obvious thing and just brute-forced it. However, this took forever and yielded no interesting results. Even after trying some optimizations, it still took too long.
One possibility that came to mind was to try and flatten the transformations into a repetitive batch, but that sounded a lot harder than just caching the results until a cycle is found:
day16_p2(String) ->
    Moves = [parse16(Move) || Move <- string:lexemes(String, ",")],
    Base = << <<($a+N)>> || N <- lists:seq(0, 15) >>,
    loop16(1000000000, Moves, Base, #{Base => 0}, 1).
Similar to the first approach, but with a cache of sequences seen so far, each mapped to the step at which it was spotted, along with a counter:
loop16(0, _, Bin, _Map, _Ct) ->
    Bin;
loop16(N, Moves, OldBin, Map, Ct) ->
    Bin = lists:foldl(fun dance16/2, OldBin, Moves),
    case Map of
        #{Bin := X} ->
            loop16((N-1) rem (Ct-X), Moves, Bin, #{}, 0);
        _ ->
            loop16(N-1, Moves, Bin, Map#{Bin => Ct}, Ct+1)
    end.
The first clause is triggered when we are done, returning the final sequence. The other clause applies the dance, then looks the result up in the cache. If it's not there, we keep cycling. If it is in there, we replace the number of iterations left (N-1) by its remainder after dividing by the cycle length (Ct-X), i.e. (N-1) rem (Ct-X), clear the cache, and then keep going with what remains.
There's no guarantee there was a cycle, but there is indeed one early on (after about 60 iterations or so), and we can then get the result in a short amount of time.
Day 17
Part 1
This one asks us to use an ever-growing circular buffer. At each round, we move by a predefined number of steps over the existing buffer. Once we're done, we insert the round number in the buffer (expanding it in the process), and start over again.
Here's a sample with a step of 3:
0
^
0 1
  ^
0 2 1
  ^
0 2 3 1
    ^
0 2 4 3 1
    ^
0 5 2 4 3 1
  ^
0 5 2 4 3 6 1
          ^
The next iteration would move the cursor from 6 to 1 to 0 to 5, and insert the round number (7) after:
0 5 7 2 4 3 6 1
    ^
The page asks us to do this for 2017 iterations, and then say which number is right after 2017:
day17_p1(Steps) ->
    buffer17([], [0], 0, Steps, 1, 2017).

buffer17(Pre, Post, _, _, _, 0) ->
    L = lists:reverse(Pre, Post),
    case lists:dropwhile(fun(X) -> X =/= 2017 end, L) of
        [_,X|_] -> X;
        [_] -> hd(L);
        [] -> hd(tl(L))
    end;
buffer17(Pre, [], StepsLeft, Steps, N, Rounds) ->
    buffer17([], lists:reverse(Pre), StepsLeft, Steps, N, Rounds);
buffer17(Pre, Post, -1, Steps, N, Rounds) ->
    buffer17(Pre, [N|Post], Steps, Steps, N+1, Rounds-1);
buffer17(Pre, [H|Post], StepsLeft, Steps, N, Rounds) ->
    buffer17([H|Pre], Post, StepsLeft-1, Steps, N, Rounds).
The function once again uses the zipper format of two lists to represent the circular buffer. buffer17 takes 6 arguments:

- Pre: the buffer elements seen so far
- Post: the buffer elements ahead; the first element of this list is the current cursor
- StepsLeft: a counter of steps left to travel in the current iteration
- Steps: the number of steps passed as an argument (so that we can reset StepsLeft at each iteration)
- N: the counter of rounds seen so far
- Rounds: the counter of rounds left to do
The first function clause is the final one. When we're finished, we drop all list elements until we hit 2017, and then grab the one after. There's some handling to do for ends of lists, but that's alright. The other clauses step through the buffer and do insertion.
Part 2
Part 2 pulls that thing where we can't brute-force our way through. The mechanism still works the same, but we have 50 million iterations instead of 2017, and we are asked to find the value right after 0 instead of the one after 2017. This is good, because we operate from a fixed starting point (0 stays put, so we only care about what is in position 1) and so we can observe patterns:
0              len=N, insert at ?
0 1            len=1, insert at 1
0 2 1          len=2, insert at 1
0 2 3 1        len=3, insert at 2
0 2 4 3 1      len=4, insert at 2
0 5 2 4 3 1    len=5, insert at 1
0 5 2 4 3 6 1  len=6, insert at 5
The pattern is not necessarily obvious, but we can guess that it's a formula that will require the length, the previous position, and the step count. The position at any step can be checked to be the position of the previous iteration, plus the number of steps, divided by the length of the buffer. Take the remainder, and that gives you the position after which to insert the item. The mathematical way to represent it would be (PrevPos + StepsPerRound) rem Length + 1, with the final +1 because we insert after the position we had. A quick manual validation with the samples above shows this to be true:
? = 1 + (pos+step) rem len
1 = 1 + (0+3) rem 1
1 = 1 + (1+3) rem 2
2 = 1 + (1+3) rem 3
2 = 1 + (2+3) rem 4
1 = 1 + (2+3) rem 5
5 = 1 + (1+3) rem 6
That's the relationship we need. The only thing left to do to find the number right after 0 is to track which length we were at whenever we hit an insert position of 1, and keep the latest value:
day17_p2(Steps) ->
    count17(1, Steps, 0, 50000000, undefined).

count17(_Len, _Steps, _At, 0, Last) ->
    Last;
count17(Len, Steps, At, N, Last) ->
    Ins = 1 + (At+Steps) rem Len,
    NewLast = case Ins of
        1 -> Len;
        _ -> Last
    end,
    count17(Len+1, Steps, Ins, N-1, NewLast).
And this solves part 2.
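Running the counting loop for only the six sample rounds (step size 3) should report 5, the value sitting right after 0 in the 0 5 2 4 3 6 1 buffer above. A named-fun copy of count17:

```erlang
Count = fun Loop(_Len, _Steps, _At, 0, Last) -> Last;
            Loop(Len, Steps, At, N, Last) ->
                Ins = 1 + (At + Steps) rem Len,
                %% An insert at position 1 means Len lands right after 0.
                NewLast = case Ins of 1 -> Len; _ -> Last end,
                Loop(Len+1, Steps, Ins, N-1, NewLast)
        end.
5 = Count(1, 3, 0, 6, undefined).
```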
Day 18
Part 1
Another day for symbolic execution! We're given a program with registers and instructions, the following of which are defined:

- snd X plays a sound with a frequency equal to the value of X.
- set X Y sets register X to the value of Y.
- add X Y increases register X by the value of Y.
- mul X Y sets register X to the result of multiplying the value contained in register X by the value of Y.
- mod X Y sets register X to the remainder of dividing the value contained in register X by the value of Y (that is, it sets X to the result of X modulo Y).
- rcv X recovers the frequency of the last sound played, but only when the value of X is not zero. (If it is zero, the command does nothing.)
- jgz X Y jumps with an offset of the value of Y, but only if the value of X is greater than zero. (An offset of 2 skips the next instruction, an offset of -1 jumps to the previous instruction, and so on.)
One gotcha is that X and Y may each be either a register reference or a literal integer value, and we don't know which ahead of time.
The translation to Erlang is straightforward. Let's start with the parsing:
day18_p1(String) ->
    Inst = [parse18(S) || S <- string:lexemes(String, "\n")],
    exec18_1([], Inst, #{}).

parse18(Str) ->
    [Inst | Args] = string:lexemes(Str, " "),
    {list_to_atom(Inst),
     [try list_to_integer(A) catch error:badarg -> A end || A <- Args]}.
This sets up the overall execution. I'm using a try ... catch in the parsing so that I either get a literal integer or a string representing a register name. I'll do a dynamic dispatch on it using this function:
reg_or_val(_, X) when is_integer(X) -> X;
reg_or_val(Map, X) -> maps:get(X, Map, 0).
This lets me treat any operand as a value regardless of its format. Now we can implement the whole thing. Do note I'm still using a zipper structure, since jgz may go backwards in the instruction set:
exec18_1(Prev, [{snd, [X]}=H|T], Map) ->
    exec18_1([H|Prev], T, Map#{played => reg_or_val(Map, X)});
exec18_1(Prev, [{set, [X,Y]}=H|T], Map) ->
    exec18_1([H|Prev], T, Map#{X => reg_or_val(Map, Y)});
exec18_1(Prev, [{add, [X,Y]}=H|T], Map) ->
    exec18_1([H|Prev], T, Map#{X => reg_or_val(Map, X) + reg_or_val(Map, Y)});
exec18_1(Prev, [{mul, [X,Y]}=H|T], Map) ->
    exec18_1([H|Prev], T, Map#{X => reg_or_val(Map, X) * reg_or_val(Map, Y)});
exec18_1(Prev, [{mod, [X,Y]}=H|T], Map) ->
    exec18_1([H|Prev], T, Map#{X => reg_or_val(Map, X) rem reg_or_val(Map, Y)});
exec18_1(Prev, [{rcv, [X]}=H|T], Map) ->
    case reg_or_val(Map, X) of
        0 -> exec18_1([H|Prev], T, Map);
        _ -> maps:get(played, Map)
    end;
exec18_1(Prev, [{jgz, [X, Y]}=H|T], Map) ->
    case reg_or_val(Map, X) > 0 of
        false -> exec18_1([H|Prev], T, Map);
        true ->
            {NewPrev, NewNext} = rewind18({Prev, [H|T]}, reg_or_val(Map, Y)),
            exec18_1(NewPrev, NewNext, Map)
    end.

rewind18(Zipper, 0) -> Zipper;
rewind18({[H|T], Next}, N) when N < 0 -> rewind18({T, [H|Next]}, N+1);
rewind18({Prev, [H|T]}, N) when N > 0 -> rewind18({[H|Prev], T}, N-1).
There's nothing surprising here that we haven't seen so far. The rewind18/2 function is a variation on the zipper navigation we've used two or three times by now. Running this code directly gives us the right answer.
Part 2
Oh, this one is fun for an Erlang user. The problem definition remains the same as before, with one exception: snd and rcv have been modified to map perfectly to Erlang's message passing semantics:

snd X sends the value of X to the other program. These values wait in a queue until that program is ready to receive them. Each program has its own message queue, so a program can never receive a message it sent.
rcv X receives the next value and stores it in register X. If no values are in the queue, the program waits for a value to be sent to it. Programs do not continue to the next instruction until they have received a value. Values are received in the order they are sent.

On top of this, the register "p" should hold the program identifier (0 or 1) for each program as an initial value.
I've been stuck reimplementing mutable concepts all advent, and now it's my turn to be lazy while most other folks are stuck reimplementing concurrency!
day18_p2(String) ->
    Inst = [parse18(S) || S <- string:lexemes(String, "\n")],
    P0 = spawn(fun() -> init18(0, Inst) end),
    P1 = spawn(fun() -> init18(1, Inst) end),
    io:format("P0 = ~p, P1 = ~p~n", [P0,P1]),
    P0 ! P1,
    P1 ! P0,
    ok.

init18(N, Inst) ->
    receive
        Other -> exec18_2([], Inst, #{"p" => N, sent => 0, peer => Other})
    end.
This initialization parses as it did before, but then spawns two instances of the program. I output the pids of both of them for easy recognition in the terminal output, and then send them each other's identifier. In the init18/2 function, the process takes its own number, sticks it in "p", and then stores a counter for the number of sent messages along with the other Pid (in peer).
The rest of the implementation is the same, except for the receive and send instructions:
...
exec18_2(Prev, [{snd, [X]}=H|T], Map=#{sent := Sent, peer := Pid}) ->
    Pid ! reg_or_val(Map, X),
    exec18_2([H|Prev], T, Map#{sent => Sent+1});
exec18_2(Prev, [{rcv, [X]}=H|T], Map) ->
    receive
        Msg -> exec18_2([H|Prev], T, Map#{X => Msg})
    after 1000 ->
        io:format("~p dying, sent ~p~n", [self(), maps:get(sent, Map, 0)])
    end;
...
We detect the deadlock by waiting 1 second with nothing happening. Since the program execution in part 1 took less than a second, that's a fairly safe bet. This works and gives us output like this:
1> advent:day18_p2(String).
P0 = <0.1621.0>, P1 = <0.1622.0>
ok
<0.1622.0> dying, sent 5969
<0.1621.0> dying, sent 6096
So the right answer here would have been 5969.
Day 19
Part 1
Given a diagram like:
     |
     |  +--+
     A  |  C
 F---|----E|--+
     |  |  |  D
     +B-+  +--+
in which we start at the | on the first line (there is only one, guaranteed), follow the path of the diagram. In this diagram we'd go through A, B, C, D, E, and F, one after the other, in sequence. The characters | and - and the letters themselves do not change the current direction; only + does.
The question asks us to find which letters are encountered in which order in a much larger diagram.
Rather than building a graph of paths, we'll directly navigate the binary structure, a bit as we did in Day 14, Part 2. By making the whole map a big grid, we can use offsets to jump at random points by coordinates without a problem. Let's prepare and pack the binary. The first step was to test that the diagram in our sample input does contain an equal number of characters on each line (meaning the string is drawing a square); if not, we'd have to pad it ourselves. It turns out it's a square, so that's easy. We can get going:
day19_p1(String) ->
    [First|_] = Lines = string:lexemes(String, "\n"),
    Width = length(First), % all lines are the same width
    Grid = iolist_to_binary(Lines),
    {Start, 1} = binary:match(list_to_binary(First), <<"|">>),
    dia19(down, Grid, {Start, 0}, Width, []).
This captures the width of the grid, drops all the line breaks (they're not reachable coordinates) as part of string:lexemes/2, and then merges the lines back with iolist_to_binary. The call to binary:match/2 finds the x coordinate of the first entry point, and we enter the grid in the down direction.
The handling of each character is fairly straightforward:
dia19(Dir, Grid, {X,Y}, Width, Seen) ->
    case binary:at(Grid, X+Y*Width) of
        $| -> dia19(Dir, Grid, next19(Dir, {X,Y}), Width, Seen);
        $- -> dia19(Dir, Grid, next19(Dir, {X,Y}), Width, Seen);
        $+ ->
            NextDir = branch19(Dir, Grid, {X,Y}, Width),
            dia19(NextDir, Grid, next19(NextDir, {X,Y}), Width, Seen);
        Char when Char >= $A, Char =< $Z ->
            dia19(Dir, Grid, next19(Dir, {X,Y}), Width, [Char|Seen]);
        _ -> % done
            lists:reverse(Seen)
    end.
If we encounter either a | or a -, we keep going on our merry way. When we encounter a +, we must find which direction to switch to for the next iteration. Whenever we encounter a letter, we add it to the list of seen values. Anything else is whitespace, and whenever we encounter some, we assume we're at the end of the diagram. The little gotcha with this approach is that if the diagram ends on a boundary (0 or max on either axis) and another branch of the diagram also finishes on the opposite edge, we'll loop forever. Fortunately, the test input already had padding for this, so we don't have to add it.
The helper functions for navigation are:
next19(down, {X,Y}) -> {X,Y+1};
next19(up, {X,Y}) -> {X,Y-1};
next19(left, {X,Y}) -> {X-1,Y};
next19(right, {X,Y}) -> {X+1,Y}.

branch19(Dir, Grid, {X,Y}, Width) when Dir == down; Dir == up ->
    case binary:at(Grid, (X-1)+Y*Width) of
        $\s -> right;
        _ -> left
    end;
branch19(Dir, Grid, {X,Y}, Width) when Dir == left; Dir == right ->
    case binary:at(Grid, X+(Y-1)*Width) of
        $\s -> down;
        _ -> up
    end.
They're all straightforward. The next19 function just moves the coordinates around, and the branch19 function checks which of the directions perpendicular to the current one has a non-whitespace character; that's where we're headed. Since it has to be one or the other, we only need to test if there's a character to the left when switching to left or right, or if there's a character above when switching to up or down.
This is all that's needed to go through the whole diagram gathering all letters.
Part 2
Part 2 simply asks us to count how many steps are taken. We can't just count non-whitespace characters, since at a crossroads the same character may be visited more than once. Instead, it's simple to adapt the solution from part 1 to do what we need:
day19_p2(String) ->
    ...
    dia19_ct(down, Grid, {Start, 0}, Width, 0).

dia19_ct(Dir, Grid, {X,Y}, Width, Steps) ->
    case binary:at(Grid, X+Y*Width) of
        $| -> dia19_ct(Dir, Grid, next19(Dir, {X,Y}), Width, Steps+1);
        $- -> dia19_ct(Dir, Grid, next19(Dir, {X,Y}), Width, Steps+1);
        $+ ->
            NextDir = branch19(Dir, Grid, {X,Y}, Width),
            dia19_ct(NextDir, Grid, next19(NextDir, {X,Y}), Width, Steps+1);
        Char when Char >= $A, Char =< $Z ->
            dia19_ct(Dir, Grid, next19(Dir, {X,Y}), Width, Steps+1);
        _ -> % done
            Steps
    end.
The only change is that the Seen list is replaced by a Steps counter. Problem solved.
Day 20
Part 1
We are given a list of the form:
p=<235,226,350>, v=<1648,1562,2437>, a=<4,24,11>
p=<179,200,308>, v=<1266,1460,2161>, a=<14,10,21>
p=<119,454,170>, v=<837,3170,1198>, a=<14,12,24>
...
Each row represents a point numbered 0..N. For each point, p is a starting position, v is a given velocity, and a is an acceleration. At each step of a simulation, each component of the acceleration is added to its matching velocity component, and then each component of the new velocity is added to its matching position, moving the point.

Part 1 asks us to find which point will remain closest to <0,0,0> over time.
The trick here is that over time, acceleration is going to dominate everything. The faster you increase velocity, the further away you'll eventually end up. If two accelerations are the same, then the position of the starting point can act as a tiebreaker. If we had a tie there, then the initial velocity could act as a tiebreaker as well. No need to simulate.

Each row can be parsed rather simply:
parse20(N, Row) ->
    [A,B,C,D,E,F,G,H,I] = [list_to_integer(X)
                           || X <- string:lexemes(Row, "pva=<>, ")],
    {N, {A,B,C}, {D,E,F}, {G,H,I}}.
And putting it all together:
day20_p1(String) ->
    {_,Points} = lists:foldl(
        fun(Row, {N,Acc}) -> {N+1, [parse20(N, Row)|Acc]} end,
        {0,[]},
        string:lexemes(String, "\n")
    ),
    %% sort by accel, tiebreaker is the starting distance
    Acc = [{abs_sum(A), abs_sum(P), abs_sum(V), N} || {N,P,V,A} <- Points],
    element(4, hd(lists:sort(Acc))).

abs_sum({X,Y,Z}) -> abs(X)+abs(Y)+abs(Z).
The first expression puts all the lines in a list. The second one builds a list of all the absolute sums in the order we want them (acceleration, starting point, velocity), and the last one picks the number of the point that is closest to 0. Problem solved.
Part 2
Part 2 asks us instead to find points that collide with each other and remove them from the list. Once all the collisions are accounted for, give the number of points left.
I suspect there is a formula to get a result rapidly by comparing the curves and finding colliding points, but frankly the math goes a bit above my head here, especially since we'd need to cull points with early collisions. Another approach, given that every point keeps accelerating, would be to simulate all steps until all points start getting further and further away from each other. This is however not cheap (O(n²) comparisons per step).

Because all points keep moving faster and faster, it's a relatively safe bet that after a given number of steps (say a thousand, or ten thousand), the chance that points are still converging is drastically lower. We can just simulate our way through with brute force:
day20_p2(String) ->
    {_,Points} = lists:foldl(
        fun(Row, {N,Acc}) -> {N+1, [parse20(N, Row)|Acc]} end,
        {0,[]},
        string:lexemes(String, "\n")
    ),
    length(clear_collisions20(1000, Points)).

clear_collisions20(0, Points) ->
    Points;
clear_collisions20(N, Points) ->
    Remaining = clear20(undefined, lists:keysort(2, Points)),
    clear_collisions20(N-1, [incr20(P) || P <- Remaining]).
The function sorts all the points according to their current position, and then compares them to find if there's any collision. We remove all colliding points, and then step through the simulation with incr20/1. These functions are defined as follows:
clear20(_, []) -> [];
clear20(_, [{_, P, _, _}, {_, P, _, _} | T]) -> clear20(P, T);
clear20(Prev, [{_, Prev, _, _}|T]) -> clear20(Prev, T);
clear20(Prev, [H|T]) -> [H|clear20(Prev, T)].

incr20({N, {A,B,C}, {D,E,F}, {G,H,I}}) ->
    {N, {A+D+G, B+E+H, C+F+I}, {D+G, E+H, F+I}, {G,H,I}}.
The clearing function works in a slightly funky way. If two sequential points have the same position (clause 2), we remove them. We also note the position the point had, in case a third (or a fifth, or any odd number of) point matched the previous one. This is stored in the Prev value, which clause 3 checks against. Otherwise, we assume the points are distinct.
The increment function just adds all the required points together.
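To make the update order concrete, here's a small standalone sketch (Python, with a made-up single-axis point purely for illustration): the velocity absorbs the acceleration first, then the position absorbs the new velocity, which is exactly what the tuple arithmetic in incr20/1 encodes:

```python
# One simulation step: v += a first, then p += v, per axis.
def step(point):
    n, p, v, a = point
    v2 = tuple(vi + ai for vi, ai in zip(v, a))   # new velocity
    p2 = tuple(pi + vi for pi, vi in zip(p, v2))  # position uses new velocity
    return (n, p2, v2, a)

# hypothetical point: p=<3,0,0>, v=<2,0,0>, a=<-1,0,0>
print(step((0, (3, 0, 0), (2, 0, 0), (-1, 0, 0))))
# -> (0, (4, 0, 0), (1, 0, 0), (-1, 0, 0))
```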
Running the program with multiple values finds the right answer, which the website accepts.
Day 21
Part 1
For me this has been the hardest one of the advent so far. We're given a pattern of this form:
.#.
..#
###
And are given rules of the form:
../.# => ##./#../...
.#./..#/### => #..#/..../..../#..#
[...]
Where each / represents a linebreak in the pattern. All rules have a pattern on the left made of either 2x2 or 3x3 squares. The initial pattern is a 3x3 grid, and the rules for a 3x3 grid expand it to a 4x4 grid; the rules for a 2x2 square always expand to a 3x3 square. The puzzle also dubs the rules 'enhancements', which I'll use in code as a term, along with 'patterns'.
The rules can be applied repeatedly by following these steps:
 If the size is evenly divisible by 2, break the pixels up into 2x2 squares, and convert each 2x2 square
 Otherwise, the size is evenly divisible by 3; break the pixels up into 3x3 squares, and convert each 3x3 square
Moreover, the pattern rules can be rotated and flipped, meaning that:
.#.   .#.   #..   ###
..#   #..   #.#   ..#
###   ###   ##.   .#.
Are all equivalent.
The problem asks us to find how many # symbols will remain after 5 iterations.
So uh, first things first, we'll represent things with bit sequences, where 1 stands for # and 0 stands for '.'. We'll put them in a long binary, still skip around using X and Y coordinates like we've done in other exercises, and use these patterns to look up specific squares. Since the grid is ever-growing, we'll have to devise an algorithm to merge the squares back into a final binary sequence. Without knowing how to do this yet, we should pick a format for our configuration.
Since I suspect it will be simple to just look up a square and extract it as a sequence of coordinates, but that we will likely need to merge squares row by row, I've decided to split up the configuration as a map of entries like:
../.# => ##./#../...
[0,0,0,1] => [<<1:1,1:1,0:1>>, <<1:1,0:2>>, <<0:3>>]
Here's the setup:
day21_p1(String) ->
    Patterns = lists:foldl(
        fun(Line, Map) -> flip_pattern(parse21(Line), Map) end,
        #{},
        string:lexemes(String, "\n")
    ),
    Init = {<<0:1,1:1,0:1,
              0:1,0:1,1:1,
              1:1,1:1,1:1>>, 3},
    run21(5, Init, Patterns).
So we break the lines up, read individual instructions, then expand them into their own variations (rotations and flips) and then store them in a map, before running the whole thing. Here's the line parsing:
parse21(Line) ->
    [Pattern, Result] = string:lexemes(Line, " => "),
    {[[char21(Char) || Char <- Str] || Str <- string:lexemes(Pattern, "/")],
     [<< <<(char21(Char)):1>> || Char <- Str >>
      || Str <- string:lexemes(Result, "/")]}.

char21($#) -> 1;
char21($.) -> 0.
Now for the pattern variations. Here I start by getting the 4 possible orientations (rotations) of a pattern, and then make a mirror of each of them for the flip step. This gives me a list of modified patterns for each original pattern, which I stick into a map:
flip_pattern({Pattern, Res}, Map) ->
    NewMap = maps:from_list(
        lists:flatten([[{lists:append(Pat), Res} || Pat <- flip_patterns(Rot)]
                       || Rot <- get_rotations(Pattern)])
    ),
    maps:merge(Map, NewMap).
The rotations are kind of straightforward; I just made them happen visually with pattern matching:
get_rotations(Pattern) ->
    [Pattern,
     rot90(Pattern),
     rot90(rot90(Pattern)),
     rot90(rot90(rot90(Pattern)))].

rot90([[A,B],
       [C,D]]) -> [[B,D],
                   [A,C]];
rot90([[A,B,C],
       [D,E,F],
       [G,H,I]]) -> [[C,F,I],
                     [B,E,H],
                     [A,D,G]].
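One property worth checking about these clauses: applying the rotation four times should give back the original pattern, which also caps the number of variants per rule at 8 (4 rotations × 2 flips). A quick standalone check (a Python transcription of the clauses above, written purely for scratch testing, not part of the solution):

```python
# Transcription of the Erlang rot90 clauses for 2x2 and 3x3 patterns.
def rot90(p):
    if len(p) == 2:
        (a, b), (c, d) = p
        return [[b, d], [a, c]]
    (a, b, c), (d, e, f), (g, h, i) = p
    return [[c, f, i], [b, e, h], [a, d, g]]

start = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]  # the .#./..#/### start pattern
p = start
for _ in range(4):
    p = rot90(p)
assert p == start  # four quarter-turns are the identity
```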
Flipping the pattern just requires reversing each of its rows:
flip_patterns(L1) ->
    L2 = [lists:reverse(Row) || Row <- L1],
    [L1, L2].
Now for the running:
run21(0, {Bin,_}, _) ->
    lists:sum([1 || <<1:1>> <= Bin]);
run21(N, {Bin,Size}, Map) ->
    Div = if Size rem 2 == 0 -> 2 % I had initially flipped these 2 clauses
           ; Size rem 3 == 0 -> 3 % and spent a real long time debugging it
          end,
    Squares = [extract21(square(Nth, Size, Div), Bin)
               || Nth <- lists:seq(0, (Size*Size) div (Div*Div) - 1)],
    NewBin = squares_to_bin([enhance(Square, Map) || Square <- Squares]),
    run21(N-1, {NewBin, trunc(math:sqrt(bit_size(NewBin)))}, Map).
We iterate based on the counter N. We divide the map according to its size at each run, and keep that divisor Div, since it lets us deduce how many squares (and of which size) there will be. The second expression does just that: lists:seq(0, (Size*Size) div (Div*Div) - 1) generates a list of incrementing integers, each representing a square. A 6x6 grid split into 3x3 squares would have 4 of them, so the list generated would be [0,1,2,3]. The square/3 function should return the positions of each value. For example, in the grid:
0110
1100
1001
0111
The first 2x2 square would have the sequence of coordinates [0,1,4,5], and the 4th one would have [10,11,14,15]. These values are then looked up directly by position in the binary by the function extract21/2, which would give us the square values [[0,1,1,1], [1,0,0,0], [1,0,0,1], [0,1,1,1]]. The ordering here will prove to be important as well.
The function enhance will look up the patterns in the map and expand them. So if the first 2x2 square results in a 3x3 square, the end list will be of the form [Square1=[<<_:3>>, <<_:3>>, <<_:3>>], ...]. This list of squares is then merged into a flat bitstring by squares_to_bin().
Let's look at the individual functions. First, the square positions:
square(N, GridSize, Size) ->
    XOffset = (Size * N) rem GridSize, % which col offset are we at
    YOffset = ((Size * N) div GridSize) * Size, % how many filled rows are there
    [Y*GridSize + X || Y <- lists:seq(YOffset, YOffset+Size-1),
                       X <- lists:seq(XOffset, XOffset+Size-1)].
This required a bit of fudging to come up with, but the comments explain what the code does. The list comprehension then generates the set of {X,Y} coordinates in the proper order for each square position (N), which are flattened to an absolute position in the sequence (0..GridSize²). These positions are looked up with the following code:
extract21(Coords, Bin) ->
    [bit_at(Bin, Coord) || Coord <- Coords].

bit_at(Bin, Pos) ->
    <<_:Pos/bits, Bit:1, _/bits>> = Bin,
    Bit.
The Erlang/OTP binary module won't work on bitstrings, so bit_at/2 is the equivalent of binary:at/2 from the standard library. This gives us a sequence that represents a whole pattern, which can be matched against the patterns map:
enhance(Bin, Map) ->
    maps:get(Bin, Map).
We're then left with filling in the code to merge all the squares back into one single bit sequence. Let's take a look at the following grid and its matching Erlang representation:
 1  2      0110      [[[0,1],[1,1]], % 1
           1100       [[1,0],[0,0]], % 2
       =>  1001       [[1,0],[0,1]], % 3
 3  4      0111       [[0,1],[1,1]]] % 4
To merge them, we must take the first row of each square in the first row of squares, then the second row of each of those squares, and then repeat this for every row of every square in every row of squares (yeah, that doesn't sound super clear). So let's get it done; it may be clearer with code:
squares_to_bin(Squares) ->
    Side = trunc(math:sqrt(length(Squares))),
    Rows = [squares_to_bin(Row, []) || Row <- bucket(Squares, Side)],
    << <<Row/bits>> || Row <- Rows >>.
Since we're operating on lists of squares, a list with 4 squares in it has 2 squares per row. With a list of 36 squares, we'd have 6 squares per row. The square root gives us the side; we're just splitting up all the squares evenly.
The bucket/2 function does this by building the 'rows of squares' we'll need:
bucket([], _) -> [];
bucket(List, N) ->
    {H, T} = lists:split(N, List),
    [H | bucket(T, N)].
Each row is then turned to a binary:
squares_to_bin([[]|_], _) -> <<>>;
squares_to_bin([], Acc) -> squares_to_bin(lists:reverse(Acc), []);
squares_to_bin([[Row|Sq] | T], Acc) ->
    Tail = squares_to_bin(T, [Sq|Acc]),
    << Row/bits, Tail/bits >>.
This function works by grabbing the first sequence of each square in a line and putting them together, and then repeating. With all the functions for square merging together, we essentially do the following:
[[[0,1],[1,1]], [[1,0],[0,0]], [[1,0],[0,1]], [[0,1],[1,1]]]
  | bucket/2
  v
[[[[0,1],[1,1]], [[1,0],[0,0]]],  % row 1
 [[[1,0],[0,1]], [[0,1],[1,1]]]]  % row 2
  | squares_to_bin/2
  v
0110 ++ [[1,1],[0,0]] ++ squares_to_bin([[[1,0],[0,1]], [[0,1],[1,1]]])
  |
  v
01101100 ++ squares_to_bin([[[1,0],[0,1]], [[0,1],[1,1]]])
  |
  v
0110110010010111
Which is equivalent to:

0110
1100
1001
0111
The code can just iterate over and over again and count the 1s required.
Part 2
Part 2 just asked us to give the result after 18 iterations. Fortunately, this confusing solution worked fine for this as well.
Day 22
Part 1
The description of the problem gives us a map like this:
..#
#..
...
The map is infinite, but we only know about a limited part of it, where a . means a node is clean and a # means it is infected. All nodes not yet discovered in the map (as we expand it) are considered to be clean. We have a cursor at the center of the map ({0,0}), facing up (out of up, down, left, right). At each 'burst', we do three things:
 Orient the cursor; it turns right if the current position is infected, and left if it is clean.
 Update the current position's state, by flipping infected to clean, and clean to infected.
 Move forward in the new direction the cursor is now facing.
This is done for 10,000 bursts, after which we should give the number of infections that have taken place.
This is another problem using a grid of states, although this one is an infinite grid starting from the center. This means we'll have to use a map with its origin at {0,0} to track and update state, since we can't otherwise represent coordinates on an ever-growing binary stream.

So given we have a square map as an input, what we should do is find the middle point and declare it {0,0}. That middle point can be considered the offset of the map: if we have 25 columns and rows (0..24), then the central one will be at the 12th column and row, so the first point we'll ever map (the top-left corner) will be {-12,-12}. Using this offset, we'll then stick every value in an Erlang map that will be navigable.
day22_p1(String, Bursts) ->
    %% assume a square
    Rows = string:lexemes(String, "\n"),
    Side = length(Rows),
    Mid = Side div 2,
    Offset = -Mid,
    Map = build22_1(Rows, Offset, {Offset,Offset}, #{}),
    run22_1({0,0}, up, Map, Bursts, 0).

build22_1([], _, _, Map) ->
    Map;
build22_1([[]|T], Offset, {_,Y}, Map) ->
    build22_1(T, Offset, {Offset,Y+1}, Map);
build22_1([[H|T]|Rows], Offset, {X,Y}, Map) ->
    Type = case H of
        $. -> clean;
        $# -> infected
    end,
    build22_1([T|Rows], Offset, {X+1,Y}, Map#{{X,Y} => Type}).
With the map built, we can start at {0,0}
, facing upwards, and implementing the tiny state machine:
run22_1(_Pos, _Dir, _Map, 0, Acc) ->
    Acc;
run22_1(Pos, Dir, Map, N, Acc) ->
    case maps:get(Pos, Map, clean) of
        infected ->
            NewDir = turn_right(Dir),
            NewMap = Map#{Pos => clean},
            run22_1(next(NewDir, Pos), NewDir, NewMap, N-1, Acc);
        clean ->
            NewDir = turn_left(Dir),
            NewMap = Map#{Pos => infected},
            run22_1(next(NewDir, Pos), NewDir, NewMap, N-1, Acc+1)
    end.
This is a rather straightforward encoding of the rules. Here are the helper functions:
turn_right(up) -> right;
turn_right(right) -> down;
turn_right(down) -> left;
turn_right(left) -> up.

turn_left(up) -> left;
turn_left(left) -> down;
turn_left(down) -> right;
turn_left(right) -> up.

next(up, {X,Y}) -> {X,Y-1};
next(down, {X,Y}) -> {X,Y+1};
next(left, {X,Y}) -> {X-1,Y};
next(right, {X,Y}) -> {X+1,Y}.
This solves the problem.
Part 2
Part 2 basically adds 2 possible states: flagged (represented by an F on the map) and weakened (a W on the map). On a flagged node, the cursor turns around. On a weakened one, it keeps going the same way.
On top of that, the state transitions changed. Rather than:
clean -> infected
  ^---------'
We now have:
clean -> weakened -> infected
  ^                     |
  '------ flagged <-----'
We still have to count the infections that have taken place, but we now have to do it over 10,000,000 bursts rather than 10,000.
This is fairly straightforward to adapt from the previous code; we just need to add the new clauses everywhere:
day22_p2(String, Bursts) ->
    %% assume a square
    Rows = string:lexemes(String, "\n"),
    Side = length(Rows),
    Mid = Side div 2,
    Offset = -Mid,
    Map = build22_2(Rows, Offset, {Offset,Offset}, #{}),
    run22_2({0,0}, up, Map, Bursts, 0).

build22_2([], _, _, Map) ->
    Map;
build22_2([[]|T], Offset, {_,Y}, Map) ->
    build22_2(T, Offset, {Offset,Y+1}, Map);
build22_2([[H|T]|Rows], Offset, {X,Y}, Map) ->
    Type = case H of
        $. -> clean;
        $# -> infected;
        $F -> flagged;
        $W -> weakened
    end,
    build22_2([T|Rows], Offset, {X+1,Y}, Map#{{X,Y} => Type}).
Here I just added the two new states and renamed the functions. Next we adapt the state machine:
run22_2(_Pos, _Dir, _Map, 0, Acc) ->
    Acc;
run22_2(Pos, Dir, Map, N, Acc) ->
    case maps:get(Pos, Map, clean) of
        infected ->
            NewDir = turn_right(Dir),
            NewMap = Map#{Pos => flagged},
            run22_2(next(NewDir, Pos), NewDir, NewMap, N-1, Acc);
        clean ->
            NewDir = turn_left(Dir),
            NewMap = Map#{Pos => weakened},
            run22_2(next(NewDir, Pos), NewDir, NewMap, N-1, Acc);
        flagged ->
            NewDir = turn_left(turn_left(Dir)), % turn back
            NewMap = Map#{Pos => clean},
            run22_2(next(NewDir, Pos), NewDir, NewMap, N-1, Acc);
        weakened ->
            NewMap = Map#{Pos => infected},
            run22_2(next(Dir, Pos), Dir, NewMap, N-1, Acc+1)
    end.
And just like that it works. It takes a bit longer to run, but after a few seconds, we get the result we need.
Day 23
Part 1
Part 1 runs the same as Day 18, but with a new instruction set:

set X Y sets register X to the value of Y.
sub X Y decreases register X by the value of Y.
mul X Y sets register X to the result of multiplying the value contained in register X by the value of Y.
jnz X Y jumps with an offset of the value of Y, but only if the value of X is not zero. (An offset of 2 skips the next instruction, an offset of -1 jumps to the previous instruction, and so on.)
Just re-adapting the code works pretty well:
day23_p1(String) ->
    Inst = [parse18(S) || S <- string:lexemes(String, "\n")],
    exec23_1([], Inst, #{}, 0).

exec23_1(_, [], _, Acc) ->
    Acc;
exec23_1(Prev, [{set, [X,Y]}=H|T], Map, Acc) ->
    exec23_1([H|Prev], T, Map#{X => reg_or_val(Map, Y)}, Acc);
exec23_1(Prev, [{sub, [X,Y]}=H|T], Map, Acc) ->
    exec23_1([H|Prev], T, Map#{X => reg_or_val(Map, X) - reg_or_val(Map, Y)}, Acc);
exec23_1(Prev, [{mul, [X,Y]}=H|T], Map, Acc) ->
    exec23_1([H|Prev], T, Map#{X => reg_or_val(Map, X) * reg_or_val(Map, Y)}, Acc+1);
exec23_1(Prev, [{jnz, [X, Y]}=H|T], Map, Acc) ->
    case reg_or_val(Map, X) =/= 0 of
        false -> exec23_1([H|Prev], T, Map, Acc);
        true ->
            {NewPrev, NewNext} = rewind23({Prev, [H|T]}, reg_or_val(Map, Y)),
            exec23_1(NewPrev, NewNext, Map, Acc)
    end.

rewind23(Zipper, 0) -> Zipper;
rewind23({[H|T], Next}, N) when N < 0 -> rewind23({T, [H|Next]}, N+1);
rewind23({Prev, [H|T]}, N) when N > 0 -> rewind23({[H|Prev], T}, N-1);
rewind23({_,_}, _) -> {[],[]}. % off the end, give up
The functions have been slightly modified so that if things end up off the instruction set, the code stops.
Part 2
Part 2 has seriously been the least clear part to me. We're given little information: just that the program must start with the value of register "a" set to 1, and that it will otherwise take forever:

You'll need to optimize the program if it has any hope of completing before Santa needs that printer working.
The coprocessor's ultimate goal is to determine the final value left in register h once the program completes. Technically, if it had that... it wouldn't even need to run the program.
After setting register a to 1, if the program were to run to completion, what value would be left in register h?
After a while, I realized that they wanted nothing specific from the interpreter: we had to read the assembler we were given, reverse engineer the program it contains, figure out what it needs to do, and come up with a more efficient way to do it. I'm not too thrilled about that.
Here's the program I was given as input:
set b 57
set c b
jnz a 2
jnz 1 5
mul b 100
sub b -100000
set c b
sub c -17000
set f 1
set d 2
set e 2
set g d
mul g e
sub g b
jnz g 2
set f 0
sub e -1
set g e
sub g b
jnz g -8
sub d -1
set g d
sub g b
jnz g -13
jnz f 2
sub h -1
set g b
sub g c
jnz g 2
jnz 1 3
sub b -17
jnz 1 -23
I translated that to some form of pseudocode with a bunch of mistakes in it, and refactored the bad code to find out about a kind of dual looping structure:
day23_p2_pseudo() ->
    B = 105700, C = 122700,
    day23_while(B, C, 2, 2, 1, 0).

day23_while(B, C, D, E, F, H) ->
    F2 = D*E - B == 0,
    E2 = E + 1,
    if E2 - B =/= 0 -> day23_while(B, C, D, E2, F2, H)
     ; (D+1) - B =/= 0 -> day23_while(B, C, D+1, E2, F2, H)
     ; true ->
        H2 = if F2 == 0 -> H + 1; true -> H end,
        case B - C of
            0 -> H2;
            _ -> B2 = B + 17,
                 day23_while(B2, C, D+1, E2, 1, H2)
        end
    end.
With this in mind I went back to the instruction set and tried to make everything work as a reasonable program, with proper variable names, to understand what happens. With a few tracings with recon_trace, I extracted the following:
day23_p2() ->
    day23_p2(105700, 122700, 2, 2, 1, 0).

day23_p2(From, To, D, E, Divides, Acc) ->
    %% cycle all E to see if D*E == From
    Flag = if D*E - From == 0 -> 0
            ; true -> Divides
           end,
    E2 = E + 1,
    if E2 - From =/= 0 ->
        day23_p2(From, To, D, E2, Flag, Acc) % if it doesn't divide, go up
     ; true ->
        D2 = D + 1, % if it did divide, then increment the check
        if D2 - From =/= 0 ->
            day23_p2(From, To, D2, E2, Flag, Acc)
         ; true ->
            %% once all the values have been checked,
            %% if the number divides, then count up
            Add = if Flag == 0 -> 1; true -> 0 end,
            if From - To =/= 0 ->
                day23_p2(From+17, To, D2, E2, 1, Acc+Add)
             ; From - To == 0 ->
                day23_p2(From, To, 2, 2, 1, Acc+Add)
            end
        end
    end.
The function just looks for a count of all the non-prime numbers between 105700 and 122700 when incrementing by steps of 17!
day23_p2_optimized() ->
    length([1 || X <- lists:seq(105700, 122700, 17), not is_prime(X)]).
All we need is an efficient way to detect prime numbers. Rather than using something like the sieve of Eratosthenes, I had an implementation from my solution to Project Euler problem 7 that looks like this:
is_prime(N) when N > 0 ->
    if N == 1 -> false; % 1 is not prime
       N < 4 -> true; % 2 and 3 are prime
       N rem 2 == 0 -> false; % 2 is the only even number to be prime
       N < 9 -> true; % 5 and 7 are prime
       N rem 3 == 0 -> false; % eliminate lots of uneven numbers
       true ->
           Limit = trunc(math:sqrt(N)), % no prime divisor ever higher than sqrt(N)
           K = 5, % when N > 3, primes are at 6k±1 intervals
           is_prime(N, K, Limit)
    end;
is_prime(_) -> false.

%% Over the sqrt(N) limit for 6k±1. Is prime!
is_prime(_, K, Limit) when K > Limit -> true;
%% If N rem (6k±1) == 0 while k =< sqrt(N), not a prime
is_prime(N, K, Limit) ->
    if N rem K == 0 -> false;
       N rem (K+2) == 0 -> false;
       true -> is_prime(N, K+6, Limit)
    end.
Since all prime numbers above 3 sit right next to a multiple of 6, the solution iterates over candidates of the form 6k±1 until it can rule out or confirm a prime. I did not come up with this solution; it's the close-to-optimal one Project Euler gives in the guide that opens once you solve the problem, but it works very well for this challenge.
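As a scratch cross-check of the 6k±1 idea (a Python re-sketch of the same test, validated against naive trial division; not part of the Erlang solution):

```python
# 6k±1 primality test, same shape as the Erlang version above.
def is_prime(n):
    if n < 2:
        return False
    if n < 4:
        return True                     # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    k = 5                               # all other primes sit at 6k±1
    while k * k <= n:                   # no divisor above sqrt(n) needed
        if n % k == 0 or n % (k + 2) == 0:
            return False
        k += 6
    return True

# cross-check against naive trial division on a small range
naive = lambda n: n > 1 and all(n % d for d in range(2, n))
assert all(is_prime(n) == naive(n) for n in range(2000))
```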
Day 24
Part 1
This is a variation on dominos. We're given pairs of the form:
0/2
2/2
2/3
3/4
3/5
0/1
10/1
9/10
We start a chain at 0, and can only connect elements that have the same number as the current one, making a 'bridge'. Sample bridges include:
0/2--2/2--2/3
0/2--2/2--2/3--3/4
0/2--2/2--2/3--3/5
0/1--10/1--9/10
That is to say, we are free to flip the pairs; as long as they have the same number connecting them, we're good. The problem asks us to find the bridge we can build that has the highest value. In the sample set of pairs, the highest value is 31, from 0/1--10/1--9/10, with 0+1+1+10+10+9.
The solution can be found with a depth-first search. At each level, extract the pairs that match the current value, and try all possible combinations; then do the same starting from each child. At each level, keep the highest value seen so far. As the recursion unwinds, the maximum value comes back up.
But first things first, the parsing:
day24_p1(String) ->
    %% All pairs are unique!
    Pairs = [[list_to_integer(N) || N <- string:lexemes(Row, "/")]
             || Row <- string:lexemes(String, "\n")],
    search24_1(0, Pairs, 0).
I break up pairs like 3/4 into lists [3,4]. Then it's straight to the search:
search24_1(X, Pairs, Value) ->
    {Match, NoMatch} = lists:partition(fun([A,B]) -> A==X orelse B==X end, Pairs),
    Combinations = [{Pair, (Match--[Pair])++NoMatch} || Pair <- Match],
    case Combinations of
        [] -> Value;
        _ ->
            lists:max([begin
                           [NextX] = Pair -- [X],
                           search24_1(NextX, Next, Value+lists:sum(Pair))
                       end || {Pair, Next} <- Combinations])
    end.
First, the candidate tiles that match the current value (X) are obtained by calling lists:partition. For each of these, the possible combinations are generated. Note the expression (Match--[Pair])++NoMatch: for each candidate pair, I regenerate a list of all the non-matching pairs plus all the matching pairs except the current one.
If no matching combinations exist, the function returns the value, since there is no way to move ahead and the chain is complete. If we have candidate combinations, each of them is tried recursively. The value that needs to match next (NextX) is passed around, along with the Next pairs to try. The sum of the current pair is added to the value so far, and recursion takes care of the rest.
I was worried that trying all combinations would be computationally costly, but the small input and the limited number of candidates at each step (it helps that no pair is duplicated) keep things fast.
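To make sure the search behaves, it can be run against the sample set of pairs from the problem statement, which should yield 31. Here's a self-contained sketch, with the parser inlined and the search from above repeated under the shorter name search/3 so the snippet stands alone:

```erlang
%% Run the bridge search over the sample pairs; the puzzle statement
%% says the strongest bridge is 0/1--10/1--9/10, with strength 31.
sample_strength() ->
    Input = "0/2\n2/2\n2/3\n3/4\n3/5\n0/1\n10/1\n9/10",
    Pairs = [[list_to_integer(N) || N <- string:lexemes(Row, "/")]
             || Row <- string:lexemes(Input, "\n")],
    search(0, Pairs, 0).

%% Same logic as search24_1/3: partition out the matching pairs, try
%% each of them, and keep the best total seen.
search(X, Pairs, Value) ->
    {Match, NoMatch} = lists:partition(fun([A,B]) -> A==X orelse B==X end, Pairs),
    case [{Pair, (Match--[Pair])++NoMatch} || Pair <- Match] of
        [] -> Value;
        Combinations ->
            lists:max([begin
                           [NextX] = Pair -- [X],
                           search(NextX, Next, Value+lists:sum(Pair))
                       end || {Pair, Next} <- Combinations])
    end.
```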
Part 2
Similar problem, but slightly different definition. Rather than returning the highest score of a chain, we're asked to return the score of the longest chain. If multiple chains exist with the same length, return the highest of their scores.
Adapting our code is simple enough; since we always returned the max value at each iteration based on the score, we just need to keep doing the same thing, but on a tuple of {Length, Score}. This allows us to always pick the longest chain, with the score as a tiebreaker:
day24_p2(String) ->
    %% All pairs are unique!
    Pairs = [[list_to_integer(N) || N <- string:lexemes(Row, "/")]
             || Row <- string:lexemes(String, "\n")],
    {_Len, Strength} = search24_2(0, Pairs, {0,0}),
    Strength.

search24_2(X, Pairs, Value={Len,Strength}) ->
    {Match, NoMatch} = lists:partition(fun([A,B]) -> A==X orelse B==X end, Pairs),
    Combinations = [{Pair, (Match--[Pair])++NoMatch} || Pair <- Match],
    case Combinations of
        [] -> Value;
        _ ->
            lists:max([begin
                           [NextX] = Pair -- [X],
                           search24_2(NextX, Next, {Len+1, Strength+lists:sum(Pair)})
                       end || {Pair, Next} <- Combinations])
    end.
As you can see, this is a straight copy/paste with the only difference being in the accumulator handling.
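The tiebreaking works because Erlang compares same-size tuples element by element, left to right: lists:max/1 over {Length, Score} tuples favours the greater length first, and only falls back to the score on a tie. A tiny illustration:

```erlang
%% Erlang's term order compares same-size tuples left to right, so
%% {Length, Score} tuples sort by length first, score second.
tiebreak_demo() ->
    Longest = lists:max([{2, 100}, {3, 1}]), % longer chain wins despite lower score
    Tie = lists:max([{3, 10}, {3, 25}]),     % equal length, so higher score wins
    {Longest, Tie}.
```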
Only one day to go!
Day 25
Part 1
We're given a state-machine specification to encode. The twist is that the state machine must be implemented using something like a Turing tape: an infinite sequence of 0s and 1s, which we assume is initialized to all 0s.
State definitions go a bit like this:
In state A:
  If the current value is 0:
    - Write the value 1.
    - Move one slot to the right.
    - Continue with state B.
  If the current value is 1:
    - Write the value 0.
    - Move one slot to the left.
    - Continue with state B.
After a set number of steps, we must count the number of entries set to 1.
A zipper is once again the saving structure we can use:
day25(N) -> run25(a, {[],[0]}, N).

run25(_, {Prev, Next}, 0) -> lists:sum(Prev)+lists:sum(Next);
run25(a, {Prev, [0|Next]}, N) -> run25(b, shiftr({Prev, [1|Next]}), N-1);
run25(a, {Prev, [1|Next]}, N) -> run25(f, shiftr({Prev, [0|Next]}), N-1);
run25(b, {_, [0|_]}=S, N) -> run25(b, shiftl(S), N-1);
run25(b, {_, [1|_]}=S, N) -> run25(c, shiftl(S), N-1);
run25(c, {Prev, [0|Next]}, N) -> run25(d, shiftl({Prev, [1|Next]}), N-1);
run25(c, {Prev, [1|Next]}, N) -> run25(c, shiftr({Prev, [0|Next]}), N-1);
run25(d, {Prev, [0|Next]}, N) -> run25(e, shiftl({Prev, [1|Next]}), N-1);
run25(d, {Prev, [1|Next]}, N) -> run25(d, shiftr({Prev, [1|Next]}), N-1);
run25(e, {Prev, [0|Next]}, N) -> run25(f, shiftl({Prev, [1|Next]}), N-1);
run25(e, {Prev, [1|Next]}, N) -> run25(d, shiftl({Prev, [0|Next]}), N-1);
run25(f, {Prev, [0|Next]}, N) -> run25(a, shiftr({Prev, [1|Next]}), N-1);
run25(f, {Prev, [1|Next]}, N) -> run25(e, shiftl({Prev, [0|Next]}), N-1).
This is kind of straightforward; only the shifting left or right must be added:
shiftl({[], Next}) -> {[], [0|Next]};
shiftl({[H|T], Next}) -> {T, [H|Next]}.

shiftr({Prev, [H]}) -> {[H|Prev], [0]};
shiftr({Prev, [H|T]}) -> {[H|Prev], T}.
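To make the zipper mechanics concrete, here's a short walk on a fresh tape (the shift functions are repeated from above so the snippet runs on its own). Prev holds the cells to the left of the cursor in reverse order, the head of Next is the cell under the cursor, and each shift allocates a fresh 0 cell whenever it runs off the known tape, which is what simulates an infinite tape:

```erlang
shiftl({[], Next}) -> {[], [0|Next]};
shiftl({[H|T], Next}) -> {T, [H|Next]}.

shiftr({Prev, [H]}) -> {[H|Prev], [0]};
shiftr({Prev, [H|T]}) -> {[H|Prev], T}.

zipper_demo() ->
    S0 = {[], [0]},  % fresh tape, cursor on a lone 0
    S1 = shiftr(S0), % extends to the right: {[0], [0]}
    S2 = shiftl(S1), % back where we started: {[], [0,0]}
    S3 = shiftl(S2), % extends to the left: {[], [0,0,0]}
    {S1, S2, S3}.
```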
Part 2
Part 2 is a freebie! It's a true Christmas miracle. Instead, I decided to clean up the previous state machine, splitting it into a generic part and a specific part.
The 'specific' part is the straightforward specification:
handle(a, 0) -> {b, 1, right};
handle(a, 1) -> {f, 0, right};
handle(b, 0) -> {b, 0, left};
handle(b, 1) -> {c, 1, left};
handle(c, 0) -> {d, 1, left};
handle(c, 1) -> {c, 0, right};
handle(d, 0) -> {e, 1, left};
handle(d, 1) -> {d, 1, right};
handle(e, 0) -> {f, 1, left};
handle(e, 1) -> {d, 0, left};
handle(f, 0) -> {a, 1, right};
handle(f, 1) -> {e, 0, left}.
The format is handle(CurrentState, CurrentValue) -> {NextState, NextValue, NextShift}. Here's the generic part that calls this specification:
day25_gen(N) -> run25_gen(a, {[],[0]}, N).

run25_gen(_, {Prev, Next}, 0) ->
    lists:sum(Prev) + lists:sum(Next);
run25_gen(S, {Prev, [H|Next]}, N) ->
    case handle(S, H) of
        {NewS, NewH, left} -> run25_gen(NewS, shiftl({Prev, [NewH|Next]}), N-1);
        {NewS, NewH, right} -> run25_gen(NewS, shiftr({Prev, [NewH|Next]}), N-1)
    end.
This makes the overall system easier to manage and maintain.
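As a quick check of this generic shape, the two-state sample machine from the day 25 problem statement (run for 6 steps, expecting a diagnostic checksum of 3) can be plugged into the same runner. The whole thing, with the shifts repeated so it runs standalone:

```erlang
%% The two-state sample machine from the puzzle statement:
%%   A: on 0 write 1, move right, go to B; on 1 write 0, move left, go to B
%%   B: on 0 write 1, move left, go to A;  on 1 write 1, move right, go to A
sample_handle(a, 0) -> {b, 1, right};
sample_handle(a, 1) -> {b, 0, left};
sample_handle(b, 0) -> {a, 1, left};
sample_handle(b, 1) -> {a, 1, right}.

%% Same generic runner as run25_gen/3, dispatching on sample_handle/2.
sample_checksum(N) -> run_gen(a, {[], [0]}, N).

run_gen(_, {Prev, Next}, 0) -> lists:sum(Prev) + lists:sum(Next);
run_gen(S, {Prev, [H|Next]}, N) ->
    case sample_handle(S, H) of
        {NewS, NewH, left} -> run_gen(NewS, shiftl({Prev, [NewH|Next]}), N-1);
        {NewS, NewH, right} -> run_gen(NewS, shiftr({Prev, [NewH|Next]}), N-1)
    end.

shiftl({[], Next}) -> {[], [0|Next]};
shiftl({[H|T], Next}) -> {T, [H|Next]}.

shiftr({Prev, [H]}) -> {[H|Prev], [0]};
shiftr({Prev, [H|T]}) -> {[H|Prev], T}.
```

Swapping the real puzzle's handle/2 spec back in is all it takes to get the actual answer, which is exactly the maintainability win of splitting the generic and specific parts.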