Mathematics of the impossible, Chapter 12, Data structures
Thoughts 2023-05-14
And as promised, here’s a chapter on data structures… in a complexity theory course! — Data structures aim to maintain data in memory so as to be able to support various operations, such as answering queries about the data, and updating the data. The study of data structures is fundamental and extensive. We distinguish and study in turn two types of data-structure problems: static and dynamic. In the former the input is given once and cannot be modified by the queries. In the latter queries can modify the input; this includes classical problems such as supporting insert, search, and delete of keys.
12.1 Static data structures
Definition 12.1. A static data-structure problem is simply a function $f:\{0,1\}^{n}\to\{0,1\}^{q}$. A data structure for $f$ with space $s$, word size $w$ and time $t$ is a way to write $f$ as
$$f(x)=g(h(x)),$$
where $h:\{0,1\}^{n}\to(\{0,1\}^{w})^{s}$, $g:(\{0,1\}^{w})^{s}\to\{0,1\}^{q}$, and each output bit of $g$ depends on $\le t$ input words (we think of the input of $g$ as divided into $s$ words of length $w$).
Here we have $n$ bits of input data about which we would like to answer $q$ queries. Often the query answers and/or the word size are boolean, i.e., $w=1$. Another typical setting is $w=\log n$. The data structure aims to accomplish this by storing the input into $s$ words of memory, via the map $h$. This map is arbitrary, with no bound on resources. But after that, each query can be answered very fast, by reading only $t$ words. In general, these words can be read adaptively. But for simplicity we focus on the case in which the locations are fixed by the data structure and are the same for every input $x$.
Exercise 12.1. Consider the data structure problem where $q=n^{2}$ and query $(i,j)\in[n]^{2}$ is the parity of the input bits from $i$ to $j$.
Give a data structure for this problem with $s=n$, $w=1$, and $t=2$. (One possible solution is sketched below.)
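To make Definition 12.1 concrete, here is a hedged sketch of one natural solution (spoiler for Exercise 12.1): the map $h$ stores prefix parities and the map $g$ reads two of them. The function names and conventions are ours, not from the text.

```python
# Sketch of a solution to Exercise 12.1 (our own illustration).
# h: store prefix parities; g: answer a range-parity query with 2 probes.

def h(x):
    """Preprocessing: P[i] = x[0] xor ... xor x[i-1]; s = n+1 bits, w = 1.
    (Hard-wiring P[0] = 0 brings the space down to s = n.)"""
    P = [0]
    for bit in x:
        P.append(P[-1] ^ bit)
    return P

def g(P, i, j):
    """Query (i, j): parity of x[i..j] inclusive, reading t = 2 stored bits."""
    return P[j + 1] ^ P[i]

x = [1, 0, 1, 1, 0, 1]
assert g(h(x), 1, 3) == 0 ^ 1 ^ 1   # parity of x[1..3]
```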
Exercise 12.2. Show that any data-structure problem $f:\{0,1\}^{n}\to\{0,1\}^{q}$ has a data structure with $w=1$ and the following parameters (a sketch follows the list):
(1) $s=q$ and $t=1$, and
(2) $s=n$ and $t=n$.
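Both parameter settings come from "trivial" data structures; the following sketch (our own, with a toy problem) may help see why.

```python
# The two trivial data structures of Exercise 12.2 (illustration only).

def ds_all_answers(f, x):
    mem = f(x)                       # s = q: store every answer bit
    return lambda i: mem[i]          # t = 1: a query reads one bit

def ds_raw_input(f, x):
    mem = list(x)                    # s = n: store the input verbatim
    return lambda i: f(mem)[i]       # t = n: a query may read all n bits

f = lambda x: [x[0] ^ x[1], x[0] & x[2]]    # toy problem: n = 3, q = 2
assert ds_all_answers(f, [1, 0, 1])(1) == ds_raw_input(f, [1, 0, 1])(1) == 1
```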
Exercise 12.3. Prove that for every $n$ and $q$ there exist functions $f:\{0,1\}^{n}\to\{0,1\}^{q}$ s.t. any data structure with space $s\le q/2$ and $w=1$ requires time $t\ge\Omega(n)$.
By contrast, next we present the best known impossibility result.
Definition 12.2. A function $f:\{0,1\}^{n}\to(\{0,1\}^{w})^{q}$ is $k$-wise uniform if any $k$ output coordinates are uniform when the input to $f$ is uniform.
Theorem 12.1. [65] Let $f:\{0,1\}^{n}\to(\{0,1\}^{w})^{q}$ be $k$-wise uniform. Let $q$ be a power of $2$ and $q\ge n$. Then any data structure for $f$ using space $s$ (which recall is measured in words of $w$ bits) and time $t$ has:
$$q\le 2k\cdot\left(\frac{cs}{k}\right)^{tw},\qquad\text{for an absolute constant }c.$$
Interpreting the input as coefficients of a degree-$(k-1)$ univariate polynomial over $\mathbb{F}_{q}$ and outputting its evaluations shows that such functions exist, and are in P. Below we give a surprising data structure that nearly matches the theorem.
To match previous parameters note that here $n=k\log q$, and the output consists of $qw$ bits. Hence the bound can be rewritten as $tw\ge\Omega\!\left(\frac{\log(q/k)}{\log(cs/k)}\right)$. Note that $sw$ is the space of the data structure measured in bits. It follows that if $sw$ is linear in $n$ then $tw\ge\Omega(\log(q/k))$. This result remains non-trivial for $sw$ slightly super-linear in $n$. But remarkably, if $sw\ge n^{1+\Omega(1)}$ then nothing is known (for space a power $n^{1+\epsilon}$ one only gets a constant bound depending on $\epsilon$).
Proof. The idea in the proof is to find a subset $S$ of less than $kw$ memory bits that still allows us to answer some set of $k$ queries. This is impossible, since we can’t generate $kw$ uniform output bits from less than $kw$ memory bits.

Let $p:=k/(4s)$. Include each memory bit in $S$ with probability $p$, independently. By Theorem 2.7, $\Pr[|S|\ge 2psw]\le 2^{-\Omega(psw)}$.

We are still able to answer a query if all of its $\le tw$ memory bits fall in $S$. The probability that this happens is $\ge p^{tw}$. We now claim that with probability $\ge p^{tw}/2$, we can still answer $\ge qp^{tw}/2$ queries. Indeed, let $B$ be the number of queries we cannot answer. We have $\mathbb{E}[B]\le q(1-p^{tw})$. And so
$$\Pr\left[B\ge q\left(1-\frac{p^{tw}}{2}\right)\right]\le\frac{1-p^{tw}}{1-p^{tw}/2}\le 1-\frac{p^{tw}}{2}.$$

Thus, if $2^{-\Omega(psw)}<p^{tw}/2$ (which holds for our parameters) then there exists a set $S$ of $\le 2psw$ bits with which we can answer $\ge qp^{tw}/2$ queries. Hence we reach a contradiction if $2psw<kw$ and $qp^{tw}/2\ge k$. QED
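Solving the two conditions at the end of the proof for $t$ makes the trade-off explicit (our rearrangement, with unoptimized constants): the choice $p:=k/(4s)$ gives $2psw=kw/2<kw$, and the second condition fails for every data structure, forcing
$$t\ \ge\ \frac{\log_{2}(q/2k)}{w\,\log_{2}(4s/k)},$$
which matches the bound stated in Theorem 12.1 up to the value of the absolute constant.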
Next we show a conceptually simple data structure which nearly matches the lower bound. For simplicity we focus on data structures which use space $n^{1+\Omega(1)}$ – recall that in this case the previous result does not give anything beyond a constant. We will show this is for good reasons: there are such data structures where the time is constant. We will only show $n^{\Omega(1)}$-wise independence, as opposed to $n^{1-o(1)}$-wise, but the proof techniques next and above generalize to other settings of parameters.
Theorem 12.2. There is a map $f:\{0,1\}^{n}\to(\{0,1\}^{w})^{q}$ which is $n^{\Omega(1)}$-wise uniform and has a data structure with space $n^{1+\epsilon}$ and time $O(1/\epsilon)$, for any $\epsilon>0$ and $n$ which is a power of $2$.
To give a sense of the parameters, let for example $\epsilon:=1/2$: then the space is $n^{1.5}$ and the time is constant.
Proof. We fill the memory with evaluations of the input polynomial, as after Theorem 12.1, so that the memory cells are $dk$-wise uniform for suitable $d$ and $k$. Then we pick a random bipartite graph with $s$ nodes on the left (the memory cells) and $q$ nodes on the right (the queries). Every node on the right side has degree $d$. We answer each query by summing the corresponding cells in $\mathbb{F}_{2^{w}}$. Let $T$ be any subset of nodes on the right-hand side of size $\le k$. To show $k$-wise uniformity it suffices to show that for any such subset $T$, the joint distribution of the sums of the corresponding memory cells is uniform. For this in turn it suffices that every non-empty $T'\subseteq T$ has a unique neighbor, i.e., a left node with exactly one neighbor in $T'$. And for that, finally, it suffices that every such $T'$ has a neighborhood of size greater than $d|T'|/2$ (because if every element in the neighborhood of $T'$ has two neighbors in $T'$ then $T'$ has a neighborhood of size at most $d|T'|/2$).
Note here we are using that the neighborhood has size $\le dk$, and that the memory is $dk$-wise uniform.
We pick the graph at random and show that it has the latter property with non-zero probability. [Standard calculations follow that wordpress has trouble displaying… wait for the book draft, I guess.] QED
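The following toy sketch illustrates the construction (all parameter values and names here are our own small choices, not those of the theorem; for simplicity the cells live in $\mathbb{F}_{2}$, i.e., $w=1$):

```python
# Toy illustration of the proof of Theorem 12.2: a random right-degree-d
# bipartite graph, queries answered by XORing d memory cells, and a brute
# force check of the unique-neighbor property for small sets T.
import itertools, random

s, q, d, k = 32, 64, 4, 3   # small illustrative parameters

graph = [random.sample(range(s), d) for _ in range(q)]  # cells of each query

def answer(memory, j):      # time t = d probes
    acc = 0
    for cell in graph[j]:
        acc ^= memory[cell]
    return acc

def has_unique_neighbor(T):
    """Some cell is covered exactly once by T, so the XOR of the answers in T
    is a nonzero linear function of the (dk-wise uniform) memory."""
    count = {}
    for j in T:
        for cell in graph[j]:
            count[cell] = count.get(cell, 0) + 1
    return 1 in count.values()

ok = all(has_unique_neighbor(T)
         for size in range(1, k + 1)
         for T in itertools.combinations(range(q), size))
print("unique neighbor for all non-empty T with |T| <= k:", ok)  # resample if False
```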
12.1.1 Succinct data structures
Succinct data structures are those where the space $s$ is close to the minimum, i.e., the information-theoretic bound $n$. Specifically, we let $s=n+r$ for some $r$ called the redundancy. Unsurprisingly, stronger bounds can be proved in this setting. But, surprisingly, again these stronger bounds were shown to be tight. Moreover, it was shown that improving the bounds would imply stronger circuit lower bounds.
To illustrate, consider the ECC problem where $f$ is an error-correcting code $C:\{0,1\}^{n}\to\{0,1\}^{q}$ (with constant relative distance) and $q$ is linear in $n$.
Theorem 12.3. [26] Any data structure for the ECC problem with $w=1$ using space $n+r$ requires time $\Omega(n/r)$.
This is nearly matched by the following result.
Theorem 12.4. [81] There is an ECC problem s.t. for any $r$ it has a data structure with $w=1$, space $n+r$, and time $(n/r)\cdot\log^{O(1)}n$.
Moreover, it was shown that proving a time lower bound of $(n/r)^{1+\Omega(1)}$ would imply new circuit lower bounds. The latter result refers to bounds on the number of wires in circuits with arbitrary gates. But the following connection with the standard circuit model is also known.
Theorem 12.5. [81] Let $f$ be a function computable by bounded fan-in circuits with $c_{1}n$ wires and depth $c_{2}\log n$, for constants $c_{1}$ and $c_{2}$. Then for every $r$, $f$ has a data structure with space $n+r$ and time $(n/r)^{O(1)}$, where the $O(1)$ depends on $c_{1}$ and $c_{2}$.
Hence, proving time lower bounds for succinct data structures would give functions that cannot be computed by linear-size log-depth circuits, cf. 9.3.
12.1.2 Succincter: The trits problem
In this section we present a cute and fundamental data-structure problem with a shocking and counterintuitive solution. The trits problem is to compute the function $f:\{0,1,2\}^{n}\to(\{0,1\}^{2})^{n}$ which on input $n$ “trits” (i.e., ternary elements) outputs their representations using two bits per trit.
Note that the input ranges over $3^{n}$ elements, and so the minimum space of the data structure is $\lceil\log_{2}3^{n}\rceil=\lceil n\log_{2}3\rceil\approx 1.585n$. This will be our benchmark for space. One can encode the input to $f$ in binary as before without loss of generality, but the current ternary domain simplifies the exposition.
Simple solutions:
- The simplest solution (cf. Exercise 12.2) to this problem is to use $2$ bits per trit. With such an encoding we can retrieve each $x_{i}$ by reading just $2$ bits (which is optimal). The space used is $2n$, so we have linear redundancy.
- Another solution (cf. again Exercise 12.2) to this problem is what is called arithmetic coding: we think of the $n$ concatenated ternary elements as forming a number between $0$ and $3^{n}-1$, and we write down its binary representation. To retrieve $x_{i}$ it seems we need to read all the input bits, but the space needed is the optimal $\lceil n\log_{2}3\rceil$.
- For this and other problems, we can trade between these two extremes as follows. Group the $x_{i}$’s into blocks of length $t$. Encode each block with arithmetic coding. The retrieval time will be $\lceil t\log_{2}3\rceil$ bits and the needed space will be $(n/t)\cdot\lceil t\log_{2}3\rceil\le n\log_{2}3+n/t$ (assuming $t$ divides $n$). This is block-wise arithmetic coding; a code sketch appears after the list. It provides a power trade-off between retrieval time and redundancy. (Using number-theoretic results on logarithmic forms, one can show [80] that this last inequality is tight up to changing $t$ into $t^{O(1)}$.)
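Here is a minimal sketch of block-wise arithmetic coding (our own packing conventions; error handling omitted):

```python
# A minimal sketch of block-wise arithmetic coding for the trits problem.
import math

def encode(trits, t):
    """Pack each block of t trits into ceil(t*log2(3)) bits."""
    assert len(trits) % t == 0
    bits_per_block = math.ceil(t * math.log2(3))
    blocks = []
    for i in range(0, len(trits), t):
        v = 0
        for x in trits[i:i + t]:       # block as a number in [0, 3^t)
            v = 3 * v + x
        blocks.append((v, bits_per_block))
    return blocks                      # total space: (n/t)*ceil(t*log2 3) bits

def retrieve(blocks, t, i):
    """Read only the ceil(t*log2 3) bits of block i//t, then extract trit i."""
    v, _ = blocks[i // t]
    digits = []
    for _ in range(t):                 # ternary digits of the block
        digits.append(v % 3); v //= 3
    return digits[::-1][i % t]

trits = [2, 0, 1, 1, 2, 0]
assert retrieve(encode(trits, 3), 3, 4) == 2
```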
The shocking solution: An exponential (!) trade-off
We now present an exponential trade-off: retrieval time $O(t)$ bits and redundancy $(n/t)\cdot 2^{-\Omega(t)}+O(1)$. In particular, if we set $t:=c\log n$ for a suitable constant $c$, we get retrieval time $O(\log n)$ and redundancy $O(1)$. Moreover, the bits read are all consecutive, so with word size $\Theta(\log n)$ this can be implemented in constant time. To repeat, we can encode the trits with constant redundancy and retrieve each in constant time. This solution can also be made dynamic.
Theorem 12.6. [53, 21] The trits problem has a data structure with space $n\log_{2}3+O(1)$ (i.e., redundancy $O(1)$) and time $O(\log n)$, for any $n$ and with word size $1$. For word size $\Theta(\log n)$ the time is constant.
Next we present the proof.
Definition 12.3 (Encoding and redundancy). An encoding of a set $X$ into a set $Z$ is a one-to-one (a.k.a. injective) map $\mathrm{Enc}:X\to Z$. The redundancy of the encoding is $\log_{2}|Z|-\log_{2}|X|$.
The following lemma gives the building-block encoding we will use.
Lemma 12.1. For all sets $X$ and $Y$, there is an integer $K$, a set $Y'$ and an encoding
$$\mathrm{Enc}:X\times Y\to\{0,1\}^{K}\times Y'$$
such that (1) $\mathrm{Enc}$ has redundancy $O(1/|Y'|)$, and (2) the element of $Y$ can be recovered just by reading the bits in $\{0,1\}^{K}$.
Note that (1) says that $K+\log_{2}|Y'|\le\log_{2}(|X|\cdot|Y|)+O(1/|Y'|)$. For (2) to hold we must have $2^{K}\ge|Y|$. Combining this with the previous expression we obtain $|Y'|\le O(|X|)$. In particular we get that $2^{K}\le O(|X|\cdot|Y|)$ (in fact it will be the case that $2^{K}\le O(|X|\cdot|Y|/\sqrt{|X|})$, but the looser bound is sufficient).
The basic idea for proving the lemma is to break $X$ into $\{0,1,\ldots,q-1\}\times Y'$ and then encode $Y\times\{0,1,\ldots,q-1\}$ by using $K$ bits:
$$X\times Y\;\to\;\big(\{0,1,\ldots,q-1\}\times Y'\big)\times Y\;\to\;\{0,1\}^{K}\times Y'.$$
There is however a subtle point. If we insist on always having $2^{K}$ equal to, say, $2^{\lceil\log_{2}(|Y|\cdot q)\rceil}$ or some other fixed quantity, then one can cook up sets that make us waste a lot (i.e., almost one bit) of space. The same of course happens in the more basic approach that just sets $K:=\lceil\log_{2}(|X|\cdot|Y|)\rceil$ and encodes all of $X\times Y$ with $K$ bits. The main idea will be to “reason backwards,” i.e., we will first pick $K$ and then try to stuff as much as possible inside $\{0,1\}^{K}$. Still, our choice of $K$ will make $|Y'|$ about $\sqrt{|X|}$.
Proof. Pick any two sets $X$ and $Y$, where without loss of generality we identify them with $\{0,1,\ldots,|X|-1\}$ and $\{0,1,\ldots,|Y|-1\}$ and assume $|X|\ge 4$. Define $K:=\lceil\log_{2}(|X|\cdot|Y|/\sqrt{|X|})\rceil$, and let $Y':=\{0,1,\ldots,r-1\}$ for an $r$ determined below. To simplify notation, define $q:=\lfloor 2^{K}/|Y|\rfloor$. Note $2^{K}\ge\sqrt{|X|}\cdot|Y|$, so that $q\ge\sqrt{|X|}/2\ge 1$.

How much can we stuff into $\{0,1\}^{K}$? For a set $Y'$ of size $r:=\lceil|X|/q\rceil\le 2\sqrt{|X|}+1$, we can encode elements from $X$ in $\{0,1,\ldots,q-1\}\times Y'$, writing $x$ as $(\lfloor x/r\rfloor,\,x\bmod r)$. The redundancy of such an encoding can be bounded as follows:
$$\log_{2}(q\cdot r)-\log_{2}|X|\le\log_{2}\frac{|X|+q}{|X|}\le\frac{2q}{|X|}\le O(1/\sqrt{|X|}).$$

To calculate the total redundancy, we still need to examine the encoding from $Y\times\{0,1,\ldots,q-1\}$ to $\{0,1\}^{K}$, which maps $(y,a)$ to the integer $y\cdot q+a<|Y|\cdot q\le 2^{K}$. We chose $q$ of size $\lfloor 2^{K}/|Y|\rfloor$ precisely so that this encoding is possible. With a calculation similar to the previous one, we see that the redundancy is:
$$K-\log_{2}(|Y|\cdot q)\le\log_{2}\frac{2^{K}}{2^{K}-|Y|}\le\frac{2|Y|}{2^{K}}\le O(1/\sqrt{|X|}).$$

The total redundancy is then $O(1/\sqrt{|X|})=O(1/|Y'|)$, which gives (1).
For (2), it is clear from the construction that any $y\in Y$ can be recovered from the element of $\{0,1\}^{K}$ only: $y=\lfloor(y\cdot q+a)/q\rfloor$. QED
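A toy implementation of the lemma, following the proof above: the spill universe has size about $\sqrt{|X|}$, the $K$ bits store the pair $(y,\lfloor x/r\rfloor)$, and the spill is $x\bmod r$. The rounding choices below are ours.

```python
# Toy rendering of the encoding in Lemma 12.1.
import math

def make_encoding(X, Ysize):
    """X, Ysize: sizes of the sets; returns (K, r, enc, dec_y, dec)."""
    K = math.ceil(math.log2(X * Ysize / math.sqrt(X)))
    q = (1 << K) // Ysize          # how many X-parts fit per y in the K bits
    r = -(-X // q)                  # spill universe size |Y'| = ceil(X/q)
    def enc(x, y):                  # x in [X], y in [Ysize]
        a, spill = divmod(x, r)     # x -> (a, spill) in [q] x [r]
        bits = y * q + a            # (y, a) -> K bits; y sits in the bits
        return bits, spill
    def dec_y(bits):                # property (2): y from the K bits alone
        return bits // q
    def dec(bits, spill):           # full decoding from bits + spill
        y, a = divmod(bits, q)
        return a * r + spill, y
    return K, r, enc, dec_y, dec

K, r, enc, dec_y, dec = make_encoding(X=3**5, Ysize=7)
bits, spill = enc(200, 3)
assert dec_y(bits) == 3 and dec(bits, spill) == (200, 3)
```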
Proof of Theorem 12.6. Break the $n$ ternary elements into $n/t$ blocks of size $t$: $x=(x_{1},x_{2},\ldots,x_{n/t})$, where $x_{i}\in\{0,1,2\}^{t}$ for all $i$. The encoding, illustrated in Figure 1, is constructed as follows, where we use $\mathrm{Enc}$ to refer to the encoding guaranteed by Lemma 12.1.
Compute $(b_{1},y_{1}):=\mathrm{Enc}(x_{1})$, applying Lemma 12.1 with $X:=\{0,1,2\}^{t}$ and trivial $Y$.
For $i=2,3,\ldots,n/t$ compute $(b_{i},y_{i}):=\mathrm{Enc}(x_{i},y_{i-1})$, applying Lemma 12.1 with $X:=\{0,1,2\}^{t}$ and $Y:=Y_{i-1}$, the set in which the spill $y_{i-1}$ lives.
Encode $y_{n/t}$ in binary as $z$ using arithmetic coding.
The final encoding is $b_{1}b_{2}\cdots b_{n/t}\,z$. We now compute the redundancy and retrieval time.
Redundancy: From (1) in Lemma 12.1, the first $n/t$ encodings have redundancy $O(1/|Y_{i}|)=2^{-\Omega(t)}$ each. For the last (arithmetic) encoding, the redundancy is at most $1$ bit. So the total redundancy is at most $(n/t)\cdot 2^{-\Omega(t)}+1$. One can visualize this as a “hybrid argument” transforming a product of blocks of ternary elements into a product of blocks of binary elements, one block at the time.
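In symbols (our notation): summing the per-block guarantee from (1), $K_{i}+\log_{2}|Y_{i}|\le\log_{2}(3^{t}\cdot|Y_{i-1}|)+2^{-\Omega(t)}$, telescopes to
$$\sum_{i=1}^{n/t}K_{i}+|z|\;\le\;n\log_{2}3+\frac{n}{t}\cdot 2^{-\Omega(t)}+1,$$
since the arithmetic coding of the last spill adds at most one more bit of redundancy.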
Retrieval Time: Say that we want to recover some $x_{j}$ which is in block $i$. To recover block $i$, Lemma 12.1 guarantees that we only need to look at $b_{i}$ and $b_{i+1}$. This is because $x_{i}$ can be recovered by reading only $(b_{i},y_{i})$, and $y_{i}$ can be recovered by reading $b_{i+1}$ and nothing else, by (2) in Lemma 12.1 (for the last block, $y_{n/t}$ can be read off $z$ directly). Thus to complete the proof it suffices to show that each $b_{i}$ (and $z$) has length $O(t)$.
This is not completely obvious because one might have thought that the spill sets $Y_{i}$ become larger and larger, and so we apply the lemma to larger and larger inputs and the $K_{i}$ get large too. However, recall that each $|Y_{i}|\le O(|X|)$ from the comment after the statement of Lemma 12.1. Hence, every time we apply the lemma it is on an input $X\times Y_{i-1}$ of size at most $O(3^{2t})=2^{O(t)}$. Since the lemma wastes little entropy (by (1) in Lemma 12.1), none of its outputs can be much larger than its input, and so $K_{i}=O(t)$ and $|z|=O(t)$.
QED
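The whole construction can be exercised end to end on toy inputs; the sketch below reuses `make_encoding` from the previous code block (again, our own toy rendering, not an optimized implementation):

```python
# End-to-end toy version of the proof of Theorem 12.6: chain the lemma's
# encoding across blocks of t trits, and store the last spill explicitly.

def build(trits, t):
    X = 3 ** t
    codecs, blocks, y, Ysize = [], [], 0, 1
    for i in range(0, len(trits), t):
        v = 0
        for x in trits[i:i + t]:
            v = 3 * v + x                      # block i as an element of [3^t]
        codec = make_encoding(X, Ysize)        # (K, r, enc, dec_y, dec)
        bits, y = codec[2](v, y)               # (b_i, y_i) = Enc(x_i, y_{i-1})
        codecs.append(codec)
        blocks.append(bits)
        Ysize = codec[1]                       # next block's Y is this spill set
    return codecs, blocks, y                   # y = final spill, stored in binary

def get_trit(codecs, blocks, last_spill, t, j):
    i = j // t
    if i + 1 < len(blocks):                    # y_i from the bits of block i+1
        y_i = codecs[i + 1][3](blocks[i + 1])
    else:
        y_i = last_spill
    v, _ = codecs[i][4](blocks[i], y_i)        # recover block i from (b_i, y_i)
    for _ in range(t - 1 - (j % t)):           # extract the requested digit
        v //= 3
    return v % 3

trits = [2, 0, 1, 1, 2, 0, 0, 1, 2]
state = build(trits, 3)
assert [get_trit(*state, 3, j) for j in range(9)] == trits
```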
12.2 Dynamic data structures
We now study dynamic data structures. As we mentioned, here the input is not fixed but can be modified by the queries.
Definition 12.4. Fix an error-correcting code $C:\{0,1\}^{n}\to\{0,1\}^{m}$ where $m=O(n)$ and $\delta(C(x),C(x'))\ge\epsilon$ for any $x\ne x'$ in $\{0,1\}^{n}$, for a constant $\epsilon>0$. Here $\delta(u,v)$ is the relative distance, the fraction of bit positions where $u$ and $v$ differ.
The problem asks to support the following operations, starting with the all-zero message:
$\mathrm{Update}(i,b)$ for $i\in[n]$ and $b\in\{0,1\}$, which sets bit $i$ of the message to $b$, and
$\mathrm{Query}(j)$ for $j\in[m]$, which returns bit $j$ of the codeword corresponding to the current message.
The time of a dynamic data structure is the maximum number of memory cells read or written in order to support an operation.
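As an interface, the problem looks as follows; the sketch uses a toy “code” with poor distance just to be runnable, so it illustrates only the operations, not a legitimate ECC:

```python
# The dynamic ECC problem, with the trivial "store the message" solution:
# updates touch one cell, queries recompute a codeword bit from scratch
# (so query time is large). C can be any code.

class NaiveDynamicECC:
    def __init__(self, n, C):
        self.msg = [0] * n     # start with the all-zero message
        self.C = C             # C maps messages to codewords

    def update(self, i, b):    # set bit i of the message to b: 1 cell write
        self.msg[i] = b

    def query(self, j):        # bit j of the current codeword: reads n cells
        return self.C(self.msg)[j]

# toy "code": message followed by prefix parities (NOT a good ECC,
# just enough structure to make the interface runnable)
toy_C = lambda m: m + [sum(m[:i + 1]) % 2 for i in range(len(m))]
ds = NaiveDynamicECC(4, toy_C)
ds.update(2, 1)
print(ds.query(6))   # prints 1
```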
Theorem 12.7. Any dynamic data structure for the ECC problem with word size $w$ and time $t$ satisfies
$$t\ge\Omega\left(\frac{\log n}{\log(tw)}\right);$$
in particular, for $w=\log^{O(1)}n$ the time is $\Omega(\log n/\log\log n)$.

One might wonder if stronger bounds can be shown for this problem. But in fact there exist codes for which the bounds are nearly tight.
Theorem 12.8. [81] There exist codes for which the ECC problem can be solved in time $\log^{O(1)}n$ with cell size $w=O(\log n)$.
The technique in the proof of Theorem 12.7 is from [24] and can be applied to many other natural problems, leading to tight results in several cases, see Exercise ??. It is not far from the state-of-the-art in this area, which is [42].
Proof of Theorem 12.7. Pick $x\in\{0,1\}^{n}$ uniformly and $j\in[m]$ uniformly, and consider the sequence of operations
$$\mathrm{Update}(1,x_{1}),\ \mathrm{Update}(2,x_{2}),\ \ldots,\ \mathrm{Update}(n,x_{n}),\ \mathrm{Query}(j).$$
That is, we set the message to a uniform $x$ one bit at a time, and then ask for a uniformly selected bit of the codeword $C(x)$, which we also denote by $y$.
We divide the $n$ update operations into consecutive blocks, called epochs, numbered backwards from the query: epoch $1$ consists of the last $\beta$ update operations, and in general epoch $i$ consists of the $\beta^{i}$ operations immediately preceding epoch $i-1$, where $\beta:=c\,tw$ for a constant $c$ to be chosen. Hence we can have at least $\Omega(\log n/\log(tw))$ epochs, and we can assume that we have exactly this many epochs (by discarding some bits of $x$ if necessary).
The geometrically decaying size of epochs is chosen so that the number of message bits set during an epoch is much more than all the cells written by the data structure in future epochs.
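With epoch $i$ of size $\beta^{i}$ as above, the comparison driving the proof is (our notation):
$$\underbrace{\beta^{i}}_{\text{message bits set in epoch }i}\;\gg\;\underbrace{t\cdot\sum_{j<i}\beta^{j}\cdot O(w)}_{\text{bits describing all future cell writes}},$$
which holds as soon as $\beta\ge c\,tw$ for a large enough constant $c$.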
A key idea of the proof is to see what happens when the cells written during a certain epoch are ignored, or reverted to their contents right before the epoch. Specifically, after the execution of the $n$ update operations, we can associate to each memory cell the last epoch during which this cell was written. Let $M_{i}$ denote the memory cells of the data structure after the first $n$ operations, but with the change that the cells that were written last during epoch $i$ are replaced with their contents right before epoch $i$. Define $y'_{j}$ to be the result of the data-structure algorithm for $\mathrm{Query}(j)$ run on $M_{i}$, and $y':=y'_{1}y'_{2}\cdots y'_{m}$.
Let $A_{i}$ equal $1$ if $\mathrm{Query}(j)$, executed after the first $n$ operations, reads a cell that was last written in epoch $i$, and $0$ otherwise. We have
$$\sum_{i}\mathbb{E}[A_{i}]=\mathbb{E}\left[\sum_{i}A_{i}\right]\le t,$$
where the last inequality holds because a single execution of $\mathrm{Query}(j)$ reads at most $t$ cells, and each cell read is counted for at most one epoch.
We now claim that $\mathbb{E}[A_{i}]\ge\Omega(1)$ for every $i$. This concludes the proof, since then $t\ge\sum_{i}\mathbb{E}[A_{i}]\ge\Omega(\text{number of epochs})=\Omega(\log n/\log(tw))$.
In the remainder we justify the claim. Fix arbitrarily the bits of $x$ set before Epoch $i$. For a uniform setting of the remaining $\ge\beta^{i}$ bits of $x$, note that the message ranges over at least
$$2^{\beta^{i}}$$
codewords. On the other hand, we claim that ranges over much fewer strings. Indeed, the total number of cells written in all epochs after is at most
We can describe all these cells by writing down their indices and contents using $O(t\beta^{i-1}w)\le\beta^{i}/2$ bits (here we assume a cell’s index and contents fit in $O(w)$ bits, and the inequality holds for a suitable choice of the constant $c$ in $\beta$). Note that this information can depend on the operations performed during Epoch $i$, but the point is that it takes few possible values overall. Since the cells last changed during Epoch $i$ are reverted to their contents right before Epoch $i$, this information, together with the bits of $x$ fixed above, suffices to describe $M_{i}$, and hence $y'$. Therefore, $y'$ ranges over at most $2^{\beta^{i}/2}$ strings.
For each string in the range of $y'$, at most one codeword can have relative distance less than $\epsilon/2$ from it, for else you’d have two codewords at relative distance less than $\epsilon$, violating the distance of the code.
Hence, except with probability $2^{\beta^{i}/2}\cdot 2^{-\beta^{i}}\le 2^{-\beta^{i}/2}$ over the choice of $x$, we have $\delta(y,y')\ge\epsilon/2$. If $\delta(y,y')\ge\epsilon/2$ then the probability over $j$ that $y_{j}\ne y'_{j}$ is at least $\epsilon/2$; and whenever $y_{j}\ne y'_{j}$ we must have $A_{i}=1$, for a query that reads no cell last written in Epoch $i$ returns the same answer on $M_{i}$ as on the true memory. And so $\mathbb{E}[A_{i}]\ge(\epsilon/2)(1-2^{-\beta^{i}/2})\ge\Omega(1)$, proving the claim. QED
12.3 Notes
The exposition of the trits problem is from [76].
References
[1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Tight hardness results for LCS and other sequence similarity measures. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 59–78. IEEE Computer Society, 2015.
[2] Leonard Adleman. Two theorems on random polynomial time. In 19th IEEE Symp. on Foundations of Computer Science (FOCS), pages 75–83. 1978.
[3] Miklós Ajtai. $\Sigma_{1}^{1}$-formulae on finite structures. Annals of Pure and Applied Logic, 24(1):1–48, 1983.
[4] Miklós Ajtai. A non-linear time lower bound for boolean branching programs. Theory of Computing, 1(1):149–176, 2005.
[5] Eric Allender. A note on the power of threshold circuits. In 30th Symposium on Foundations of Computer Science, pages 580–584, Research Triangle Park, North Carolina, 30 October–1 November 1989. IEEE.
[6] Eric Allender. The division breakthroughs. Bulletin of the EATCS, 74:61–77, 2001.
[7] Eric Allender and Michal Koucký. Amplifying lower bounds by means of self-reducibility. J. of the ACM, 57(3), 2010.
[8] Noga Alon, Oded Goldreich, Johan Håstad, and René Peralta. Simple constructions of almost $k$-wise independent random variables. Random Structures & Algorithms, 3(3):289–304, 1992.
[9] Dana Angluin and Leslie G. Valiant. Fast probabilistic algorithms for hamiltonian circuits and matchings. J. Comput. Syst. Sci., 18(2):155–193, 1979.
[10] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. J. of the ACM, 45(3):501–555, May 1998.
[11] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). SIAM J. Comput., 47(3):1087–1097, 2018.
[12] Kenneth E. Batcher. Sorting networks and their applications. In AFIPS Spring Joint Computing Conference, volume 32, pages 307–314, 1968.
[13] Paul Beame, Stephen A. Cook, and H. James Hoover. Log depth circuits for division and related problems. SIAM J. Comput., 15(4):994–1003, 1986.
[14] Paul Beame, Michael Saks, Xiaodong Sun, and Erik Vee. Time-space trade-off lower bounds for randomized computation of decision problems. J. of the ACM, 50(2):154–195, 2003.
[15] Michael Ben-Or and Richard Cleve. Computing algebraic formulas using a constant number of registers. SIAM J. on Computing, 21(1):54–58, 1992.
[16] Samuel R. Buss and Ryan Williams. Limits on alternation trading proofs for time-space lower bounds. Comput. Complex., 24(3):533–600, 2015.
[17] Lijie Chen and Roei Tell. Bootstrapping results for threshold circuits “just beyond” known lower bounds. In Moses Charikar and Edith Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 34–41. ACM, 2019.
[18] Richard Cleve. Towards optimal simulations of formulas by bounded-width programs. Computational Complexity, 1:91–105, 1991.
[19] Stephen A. Cook. A hierarchy for nondeterministic time complexity. J. of Computer and System Sciences, 7(4):343–353, 1973.
[20] L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Comput., 5(4):618–623, 1976.
[21] Yevgeniy Dodis, Mihai Pǎtraşcu, and Mikkel Thorup. Changing base without losing space. In 42nd ACM Symp. on the Theory of Computing (STOC), pages 593–602. ACM, 2010.
[22] Lance Fortnow. Time-space tradeoffs for satisfiability. J. Comput. Syst. Sci., 60(2):337–353, 2000.
[23] Aviezri S. Fraenkel and David Lichtenstein. Computing a perfect strategy for n x n chess requires time exponential in n. J. Comb. Theory, Ser. A, 31(2):199–214, 1981.
[24] Michael L. Fredman and Michael E. Saks. The cell probe complexity of dynamic data structures. In ACM Symp. on the Theory of Computing (STOC), pages 345–354, 1989.
[25] Anka Gajentaan and Mark H. Overmars. On a class of problems in computational geometry. Comput. Geom., 5:165–185, 1995.
[26] Anna Gál and Peter Bro Miltersen. The cell probe complexity of succinct data structures. Theoretical Computer Science, 379(3):405–417, 2007.
[27] M. R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[28] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatsh. Math. Phys., 38, 1931.
[29] Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge University Press, 2008.
[30] Raymond Greenlaw, H. James Hoover, and Walter Ruzzo. Limits to Parallel Computation: P-Completeness Theory. Oxford University Press, 1995.
[31] David Harvey and Joris van der Hoeven. Integer multiplication in time $O(n\log n)$. Annals of Mathematics, 193(2):563–617, 2021.
[32] F. C. Hennie. One-tape, off-line turing machine computations. Information and Control, 8(6):553–578, 1965.
[33] Fred Hennie and Richard Stearns. Two-tape simulation of multitape turing machines. J. of the ACM, 13:533–546, October 1966.
[34] John E. Hopcroft, Wolfgang J. Paul, and Leslie G. Valiant. On time versus space. J. ACM, 24(2):332–337, 1977.
[35] Russell Impagliazzo and Ramamohan Paturi. The complexity of $k$-SAT. In IEEE Conf. on Computational Complexity (CCC), pages 237–, 1999.
[36] Russell Impagliazzo, Ramamohan Paturi, and Michael E. Saks. Size-depth tradeoffs for threshold circuits. SIAM J. Comput., 26(3):693–707, 1997.
[37] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Computer & Systems Sciences, 63(4):512–530, Dec 2001.
[38] Russell Impagliazzo and Avi Wigderson. P = BPP if E requires exponential circuits: derandomizing the XOR lemma. In 29th ACM Symp. on the Theory of Computing (STOC), pages 220–229. ACM, 1997.
[39] Richard M. Karp and Richard J. Lipton. Turing machines that take advice. L’Enseignement Mathématique. Revue Internationale. IIe Série, 28(3-4):191–209, 1982.
[40] Kojiro Kobayashi. On the structure of one-tape nondeterministic turing machine time hierarchy. Theor. Comput. Sci., 40:175–193, 1985.
[41] Swastik Kopparty and Srikanth Srinivasan. Certifying polynomials for AC$^{0}[\oplus]$ circuits, with applications to lower bounds and circuit compression. Theory of Computing, 14(1):1–24, 2018.
[42] Kasper Green Larsen, Omri Weinstein, and Huacheng Yu. Crossing the logarithmic barrier for dynamic boolean data structure lower bounds. SIAM J. Comput., 49(5), 2020.
[43] Leonid A. Levin. Universal sequential search problems. Problemy Peredachi Informatsii, 9(3):115–116, 1973.
[44] Carsten Lund, Lance Fortnow, Howard Karloff, and Noam Nisan. Algebraic methods for interactive proof systems. J. of the ACM, 39(4):859–868, October 1992.
[45] O. B. Lupanov. A method of circuit synthesis. Izv. VUZ Radiofiz., 1:120–140, 1958.
[46] Wolfgang Maass and Amir Schorr. Speed-up of Turing machines with one work tape and a two-way input tape. SIAM J. on Computing, 16(1):195–202, 1987.
[47] David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC1. J. of Computer and System Sciences, 38(1):150–164, 1989.
[48] Joseph Naor and Moni Naor. Small-bias probability spaces: efficient constructions and applications. SIAM J. on Computing, 22(4):838–856, 1993.
[49] E. I. Nechiporuk. A boolean function. Soviet Mathematics-Doklady, 169(4):765–766, 1966.
[50] Valery A. Nepomnjaščiĭ. Rudimentary predicates and Turing calculations. Soviet Mathematics-Doklady, 11(6):1462–1465, 1970.
[51] NEU. From RAM to SAT. Available at http://www.ccs.neu.edu/home/viola/, 2012.
[52] Christos H. Papadimitriou and Mihalis Yannakakis. Optimization, approximation, and complexity classes. J. Comput. Syst. Sci., 43(3):425–440, 1991.
[53] Mihai Pǎtraşcu. Succincter. In 49th IEEE Symp. on Foundations of Computer Science (FOCS). IEEE, 2008.
[54] Wolfgang J. Paul, Nicholas Pippenger, Endre Szemerédi, and William T. Trotter. On determinism versus non-determinism and related problems (preliminary version). In IEEE Symp. on Foundations of Computer Science (FOCS), pages 429–438, 1983.
[55] Nicholas Pippenger and Michael J. Fischer. Relations among complexity measures. J. of the ACM, 26(2):361–381, 1979.
[56] Alexander Razborov. Lower bounds on the dimension of schemes of bounded depth in a complete basis containing the logical addition function. Akademiya Nauk SSSR. Matematicheskie Zametki, 41(4):598–607, 1987. English translation in Mathematical Notes of the Academy of Sci. of the USSR, 41(4):333-338, 1987.
[57] Omer Reingold. Undirected connectivity in log-space. J. of the ACM, 55(4), 2008.
[58] J. M. Robson. N by N checkers is exptime complete. SIAM J. Comput., 13(2):252–267, 1984.
[59] Rahul Santhanam. On separators, segregators and time versus space. In IEEE Conf. on Computational Complexity (CCC), pages 286–294, 2001.
[60] Walter J. Savitch. Relationships between nondeterministic and deterministic tape complexities. Journal of Computer and System Sciences, 4(2):177–192, 1970.
[61] Arnold Schönhage. Storage modification machines. SIAM J. Comput., 9(3):490–508, 1980.
[62] Adi Shamir. IP = PSPACE. J. of the ACM, 39(4):869–877, October 1992.
[63] Claude E. Shannon. The synthesis of two-terminal switching circuits. Bell System Tech. J., 28:59–98, 1949.
[64] Victor Shoup. New algorithms for finding irreducible polynomials over finite fields. Mathematics of Computation, 54(189):435–447, 1990.
[65] Alan Siegel. On universal classes of extremely random constant-time hash functions. SIAM J. on Computing, 33(3):505–543, 2004.
[66] Michael Sipser. A complexity theoretic approach to randomness. In ACM Symp. on the Theory of Computing (STOC), pages 330–335, 1983.
[67] Roman Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In 19th ACM Symp. on the Theory of Computing (STOC), pages 77–82. ACM, 1987.
[68] Larry Stockmeyer and Albert R. Meyer. Cosmological lower bound on the circuit complexity of a small problem in logic. J. ACM, 49(6):753–784, 2002.
[69] Seinosuke Toda. PP is as hard as the polynomial-time hierarchy. SIAM J. on Computing, 20(5):865–877, 1991.
[70] Alan M. Turing. On computable numbers, with an application to the entscheidungsproblem. Proc. London Math. Soc., s2-42(1):230–265, 1937.
[71] Leslie G. Valiant. Graph-theoretic arguments in low-level complexity. In 6th Symposium on Mathematical Foundations of Computer Science, volume 53 of Lecture Notes in Computer Science, pages 162–176. Springer, 1977.
[72] Leslie G. Valiant and Vijay V. Vazirani. NP is as easy as detecting unique solutions. Theor. Comput. Sci., 47(3):85–93, 1986.
[73] Dieter van Melkebeek. A survey of lower bounds for satisfiability and related problems. Foundations and Trends in Theoretical Computer Science, 2(3):197–303, 2006.
[74] Dieter van Melkebeek and Ran Raz. A time lower bound for satisfiability. Theor. Comput. Sci., 348(2-3):311–320, 2005.
[75] N. V. Vinodchandran. A note on the circuit complexity of PP. Theor. Comput. Sci., 347(1-2):415–418, 2005.
[76] Emanuele Viola. Gems of theoretical computer science. Lecture notes of the class taught at Northeastern University. Available at http://www.ccs.neu.edu/home/viola/classes/gems-08/index.html, 2009.
[77] Emanuele Viola. On approximate majority and probabilistic time. Computational Complexity, 18(3):337–375, 2009.
[78] Emanuele Viola. On the power of small-depth computation. Foundations and Trends in Theoretical Computer Science, 5(1):1–72, 2009.
[79] Emanuele Viola. Reducing 3XOR to listing triangles, an exposition. Available at http://www.ccs.neu.edu/home/viola/, 2011.
[80] Emanuele Viola. Bit-probe lower bounds for succinct data structures. SIAM J. on Computing, 41(6):1593–1604, 2012.
[81] Emanuele Viola. Lower bounds for data structures with space close to maximum imply circuit lower bounds. Theory of Computing, 15:1–9, 2019. Available at http://www.ccs.neu.edu/home/viola/.
[82] Emanuele Viola. Pseudorandom bits and lower bounds for randomized turing machines. Theory of Computing, 18(10):1–12, 2022.