Conference Proceedings

Abstract: We prove an $\Omega(\log n \log \log n)$ lower bound for the span of implementing the $n$-input, $\log n$-depth FFT circuit (also known as the butterfly network) in the non-atomic binary fork-join model. In this model, memory-access synchronizations occur only through fork operations, which spawn two child threads, and join operations, which resume a parent thread when its child threads terminate. Our bound is asymptotically tight for the non-atomic binary fork-join model, which has been of interest of late due to its conceptual elegance and ability to capture asynchrony. Our bound implies a super-logarithmic lower bound in the non-atomic binary fork-join model for implementing the butterfly merging networks used, e.g., in Batcher's bitonic and odd-even mergesort networks. This lower bound also implies an asymptotic separation result for the atomic and non-atomic versions of the fork-join model, since, as we point out, FFT circuits can be implemented in the atomic binary fork-join model with span equal to their circuit depth.
PDF DOI 
Abstract: When lexicographically sorting strings, it is not always necessary to inspect all symbols. For example, the lexicographical rank of "europar" amongst the strings "eureka", "eurasia", and "excel" only depends on its so-called relevant prefix "euro". The distinguishing prefix size $D$ of a set of strings is the number of symbols that actually need to be inspected to establish the lexicographical ordering of all strings. Efficient string sorters should be $D$-aware, i.e., their complexity should depend on $D$ rather than on the total number $N$ of all symbols in all strings. While there are many $D$-aware sorters in the sequential setting, there appear to be no such results in the PRAM model. We propose a framework yielding a $D$-aware modification of any existing PRAM string sorter. The derived algorithms are work-optimal with respect to their original counterparts: if the original algorithm requires $O(w(N))$ work, the derived one requires $O(w(D))$ work. The execution time increases only by a small factor that is logarithmic in the length of the longest relevant prefix. Our framework works universally for deterministic and randomized algorithms in all variations of the PRAM model, so that future improvements in ($D$-unaware) parallel string sorting will directly result in improvements in $D$-aware parallel string sorting.
PDF ARXIV DOI
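To make the definition of $D$ concrete, here is a quadratic-time reference computation in Python (an illustrative sketch only, not one of the $D$-aware sorters from the paper; the function names are ours): for each string, the relevant prefix is one symbol past its longest common prefix with any other string, capped at the string's own length.

```python
def lcp(s, t):
    """Length of the longest common prefix of s and t."""
    n = 0
    for a, b in zip(s, t):
        if a != b:
            break
        n += 1
    return n

def distinguishing_prefix_size(strings):
    """Distinguishing prefix size D: total number of symbols that must be
    inspected to establish the lexicographical order of all strings."""
    total = 0
    for i, s in enumerate(strings):
        longest = max(lcp(s, t) for j, t in enumerate(strings) if j != i)
        total += min(len(s), longest + 1)  # relevant prefix of s
    return total
```

On the four example strings above, the relevant prefixes are "euro", "eure", "eura", and "ex", so $D = 14$ while the total number of symbols is $N = 25$.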
Abstract:
Currently, the fastest comparison-based sorting implementation on GPUs is implemented using a parallel pairwise merge sort algorithm (Thrust library). To achieve fast runtimes, the number of threads $t$ to sort the input of $N$ elements is fine-tuned experimentally for each generation of Nvidia GPUs in such a way that the number of elements $E = N/t$ that each thread accesses in each merging round results in a small (empirically measured) number of shared memory contentions, known as bank conflicts, while balancing the number of global memory accesses and latency hiding through thread oversubscription/occupancy.
In this paper, we show that for every choice of $E < w$, such that $E$ and $w$ are coprime, there exists an input permutation on which every warp of $w$ threads of the Thrust merge sort is effectively reduced to using at most $\lceil w/E \rceil$ threads, due to the sequentialization of shared memory accesses caused by bank conflicts. Note that this matches the trivial worst-case bound on the loss of parallelism due to memory contentions for any warp accessing $wE$ contiguous shared memory locations.
Our proof is constructive, i.e., we are able to automatically construct such a permutation for every value of $E$. We also show that, in practice, such constructed inputs result in up to 50% slowdown on modern GPU hardware, compared to the performance on random inputs.
PDF DOI 
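The sequentialization due to bank conflicts discussed in the abstract above can be illustrated with a small simulation. This is a generic sketch, not code from the paper: it assumes the standard shared-memory model where bank = address mod (number of banks), and that identical addresses within a warp are broadcast rather than serialized.

```python
def max_bank_conflict(addresses, num_banks=32):
    """Degree of sequentialization for one warp-wide shared memory access:
    the maximum number of *distinct* addresses mapping to the same bank.
    A value of 1 means conflict-free; a value of k means the access is
    serialized into k transactions."""
    per_bank = {}
    for addr in set(addresses):  # identical addresses broadcast, so dedupe
        per_bank.setdefault(addr % num_banks, set()).add(addr)
    return max(len(v) for v in per_bank.values())
```

For example, a warp reading 32 consecutive words is conflict-free, while a warp whose 32 addresses all lie in the same bank (stride 32) is fully serialized into 32 transactions.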
Abstract: We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that with randomization, even a lower fragile complexity is possible.
PDF ARXIV DOI 
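As a concrete illustration of the fragile complexity measure defined above, here is a sketch (ours, not from the paper) of the classical balanced tournament for Minimum, instrumented to report the maximum number of comparisons any single element participates in, which is $\lceil \log_2 n \rceil$ for this algorithm.

```python
def tournament_min(values):
    """Find the minimum by a balanced knockout tournament.
    Returns (minimum, fragile complexity), where the fragile complexity
    is the maximum number of comparisons any one element takes part in."""
    comparisons = [0] * len(values)
    alive = list(range(len(values)))             # indices still in play
    while len(alive) > 1:
        winners = []
        for k in range(0, len(alive) - 1, 2):
            i, j = alive[k], alive[k + 1]
            comparisons[i] += 1
            comparisons[j] += 1
            winners.append(i if values[i] <= values[j] else j)
        if len(alive) % 2:                       # odd element gets a bye
            winners.append(alive[-1])
        alive = winners
    return values[alive[0]], max(comparisons)
```

Note that the winner participates in every round, so its comparison count matches the tournament depth; the paper's point is that randomization can reduce this maximum below the best comparator network depth.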
Abstract:
We study the relationship between memory accesses, bank conflicts, thread multiplicity (also known as oversubscription) and instruction-level parallelism in comparison-based sorting algorithms for Graphics Processing Units (GPUs). We experimentally validate a proposed formula that relates these parameters with asymptotic analysis of the number of memory accesses by an algorithm. Using this formula we analyze and compare several GPU sorting algorithms, identifying key performance bottlenecks in each one of them. Based on this analysis we propose a GPU-efficient multiway mergesort algorithm, GPU-MMS, which minimizes or eliminates these bottlenecks and balances various limiting factors for specific hardware.
We realize an implementation of GPU-MMS and compare it to sorting algorithm implementations in state-of-the-art GPU libraries on three GPU architectures. Despite these library implementations being highly optimized, we find that GPU-MMS outperforms them by an average of $21\%$ for random integer inputs and $14\%$ for random key-value pairs.
PDF DOI Related Software 
Abstract: We present parallel algorithms to efficiently permute a sorted array into the level-order binary search tree (BST), level-order B-tree (B-tree), and van Emde Boas (vEB) layouts in-place. We analytically determine the complexity of our algorithms and empirically measure their performance. Results indicate that on both CPU and GPU architectures B-tree layouts provide the best query performance. However, when considering the total time to permute the data and to perform a series of search queries, our vEB permutation provides the best performance on the CPU. We show that, given an input of $N=500\mathrm{M}$ $64$-bit integers, the benefits of query performance (compared to binary search) outweigh the cost of in-place permutation using our algorithms when performing at least $5\mathrm{M}$ queries ($1\%$ of $N$) and $27\mathrm{M}$ queries ($6\%$ of $N$), on our CPU and GPU platforms, respectively.
PDF DOI 
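To illustrate the level-order BST layout targeted by the permutation above, here is a simple out-of-place sketch (unlike the in-place parallel permutations of the paper): an in-order walk of the implicit complete tree consumes the sorted array, so that the children of position $k$ sit at positions $2k+1$ and $2k+2$.

```python
def to_level_order_bst(sorted_arr):
    """Permute a sorted array into the level-order (breadth-first) layout
    of a complete binary search tree: out[0] is the root, and the children
    of out[k] are out[2k+1] and out[2k+2].  Uses O(n) extra space."""
    out = [None] * len(sorted_arr)
    it = iter(sorted_arr)

    def fill(k):                    # in-order walk of the implicit tree
        if k < len(out):
            fill(2 * k + 1)
            out[k] = next(it)
            fill(2 * k + 2)

    fill(0)
    return out
```

Searching this layout is a standard branch-free loop over $k \mapsto 2k+1$ or $2k+2$, which is what gives it its cache-friendly query behavior relative to binary search on the sorted array.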
Abstract: Visibility graph reconstruction, which asks us to construct a polygon that has a given visibility graph, is a fundamental problem with unknown complexity (although visibility graph recognition is known to be in PSPACE). We show that two classes of uniform step length polygons can be reconstructed efficiently by finding and removing rectangles formed between consecutive convex boundary vertices, called tabs. In particular, we give an $O(n^2m)$-time reconstruction algorithm for orthogonally convex polygons, where $n$ and $m$ are the number of vertices and edges in the visibility graph, respectively. We further show that reconstructing a monotone chain of staircases (a histogram) is fixed-parameter tractable, when parameterized on the number of tabs, and polynomially solvable in time $O(n^2m)$ under reasonable alignment restrictions.
PDF ARXIV DOI 
Abstract: Motivated by the asymmetric read and write costs of emerging non-volatile memory technologies, we study lower bounds for the problems of sorting, permuting and multiplying a sparse matrix by a dense vector in the asymmetric external memory model (AEM). Given an AEM with internal (symmetric) memory of size $M$, transfers between symmetric and asymmetric memory in blocks of size $B$ and the ratio $\omega$ between write and read costs, we show an $\Omega(\min\{N, \frac{\omega N}{B}\log_{\frac{\omega M}{B}} \frac{N}{B}\})$ lower bound for the cost of permuting $N$ input elements. This lower bound also applies to the problem of sorting $N$ elements. This proves that the existing sorting algorithms in the AEM model are optimal to within a constant factor for reasonable ranges of parameters $N$, $M$, $B$, and $\omega$. We also show a lower bound of $\Omega\left(\min\left\{H,\frac{\omega H}{B} \log_{\frac{\omega M}{B}} \frac{N}{\max\{\delta ,M\}} \right\} \right)$ for the cost of multiplying an $N \times N$ matrix with at most $H=\delta N$ nonempty entries by a vector with $N$ elements.
PDF DOI 
Abstract:
Let $T$ be a terrain, and let $P$ be a set of points (locations) on its surface. An important problem in Geographic Information Science (GIS) is computing the visibility index of a point $p$ in $P$, that is, the number of points in $P$ that are visible from $p$. The total visibility-index problem asks for computing the visibility index of every point in $P$. Most applications of this problem involve 2-dimensional terrains represented by a grid of $n \times n$ square cells, where each cell is associated with an elevation value, and $P$ consists of the center points of these cells. Current approaches for computing the total visibility-index on such a terrain take at least quadratic time with respect to the number of the terrain cells. While finding a subquadratic solution to this 2D total visibility-index problem is an open problem, surprisingly, no subquadratic solution has been proposed for the one-dimensional (1D) version of the problem; in the 1D problem, the terrain is an $x$-monotone polyline, and $P$ is the set of the polyline vertices.
We present an $O(n \log^2 n)$ algorithm that solves the 1D total visibility-index problem in the RAM model. Our algorithm is based on a geometric dualization technique, which reduces the problem to a set of instances of the red-blue line segment intersection counting problem. We also present a parallel version of this algorithm, which requires $O(\log^2 n)$ time and $O(n \log^2 n)$ work in the CREW PRAM model. We implement a naive $O(n^2)$ approach and three variations of our algorithm: one employing an existing red-blue line segment intersection algorithm and two new approaches that perform the intersection counting by leveraging features specific to our problem. We present experimental results for both serial and parallel implementations on large synthetic and real-world datasets, using two distinct hardware platforms. Results show that all variants of our algorithm outperform the naive approach by several orders of magnitude on large datasets. Furthermore, we show that our new intersection counting implementations achieve more than 8 times speedup over the existing red-blue line segment intersection algorithm. Our parallel implementation is able to process a terrain of $2^{24}$ vertices in under 1 minute using 16 cores, achieving more than 7 times speedup over serial execution.
PDF DOI Related Software 
Abstract: Many-core Graphics Processing Units (GPUs) are being used for general-purpose computing. However, due to architectural features, for many problems it is challenging to design parallel algorithms that exploit the full compute power of GPUs. Among these features is the memory design. Although the issue of coalesced global memory access has been documented and studied extensively, another important architectural feature is the organization of shared memory into banks. The study of how bank conflicts impact algorithm performance has only recently begun to receive attention. In this work we study the predecessor search algorithm and the effects of bank conflicts on its execution time. Via complexity analysis we show that bank conflicts cause a significant loss in parallelism for a naive algorithm. We then propose two improved algorithms: one that eliminates bank conflicts altogether but uses a work-inefficient linear search, and one that is work-optimal but experiences a limited number of bank conflicts. We develop GPU implementations of these algorithms and present experimental results obtained on real-world hardware. These results validate our theoretical analysis of the naive algorithm and allow us to assess the performance of our algorithms in practice. Although both our improved algorithms outperform the naive algorithm, our main experimental finding is that our conflict-limited algorithm provides a larger performance gain.
PDF DOI 
Abstract:
In this paper, we look at the complexity of designing algorithms without any bank conflicts in the shared memory of Graphics Processing Units (GPUs). Given input of size $n$, $w$ processors and $w$ memory banks, we study three fundamental problems: sorting, permuting and $w$-way partitioning (defined as sorting an input containing exactly $n/w$ copies of every integer in $[w]$).
We solve sorting in optimal $O(\frac{n}{w} \log n)$ time. When $n \ge w^2$, we solve the partitioning problem optimally in $O(n/w)$ time. We also present a general solution for the partitioning problem which takes $O(\frac{n}{w} \log^3_{n/w} w)$ time. Finally, we solve the permutation problem using a randomized algorithm in $O(\frac{n}{w} \log\log\log_{n/w} n)$ time. Our results show evidence that, when working with banked memory architectures, there is a separation between these problems, and that the permutation and partitioning problems are not as easy as simple parallel scanning.
PDF ARXIV DOI 
Abstract:
We study the problem of list ranking in the parallel external memory (PEM) model. We observe an interesting dual nature of the hardness of the problem: on the one hand, information exchange among the processors about the structure of the list is limited; on the other hand, the problem is closely related to permuting data, which is known to be hard in external memory models.
By carefully defining the power of the computational model, we prove a permuting lower bound in the PEM model. Furthermore, we present a stronger $\Omega(\log^2 N)$ lower bound for a special variant of the problem and for a specific range of the model parameters, which takes us a step closer toward proving a nontrivial lower bound for the list ranking problem in the bulksynchronous parallel (BSP) and MapReduce models. Finally, we also present an algorithm that is tight for a larger range of parameters of the model than in prior work.
PDF ARXIV DOI 
Abstract:
In this paper we study the offline (batched) range minima query (RMQ) problem in the external memory (EM) and cache-oblivious (CO) models. In the static RMQ problem, given an array $A$, a query $\mathrm{rmq}_A(i,j)$ returns the smallest element in the range $A[i,j]$.
If $B$ is the block size and $m$ is the number of blocks that fit in the internal memory in the EM and CO models, we show that $Q$ range minima queries on an array of size $N$ can be answered in $O(\frac{N}{B} + \frac{Q}{B}\log_{m} \frac{Q}{B}) = O(\mathrm{scan}(N) + \mathrm{sort}(Q))$ I/Os in the CO model, and in slightly better $O(\mathrm{scan}(N) + \frac{Q}{B} \log_m \min\{\frac{Q}{B}, \frac{N}{B}\})$ I/Os in the EM model, using linear space in both models. Our cache-oblivious result is new, and our external memory result improves the previously known bound. We also show that the EM bound is tight by proving a matching lower bound. Our lower bound holds even if the queries are presorted in any predefined order.
In the batched dynamic RMQ problem, the queries must be answered in the presence of updates (insertions/deletions) to the array. We show that in the EM model we can solve this problem in $O(\mathrm{sort}(N) + \mathrm{sort}(Q)\log_m \frac{N}{B})$ I/Os, again improving the best previously known bound.
PDF DOI 
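For readers unfamiliar with the query semantics above, here is the classical internal-memory sparse-table solution to static RMQ (a standard textbook technique, not the I/O-efficient batched algorithms of the paper): $O(N \log N)$ preprocessing and $O(1)$ per query via two overlapping power-of-two windows.

```python
def build_sparse_table(A):
    """O(n log n) preprocessing: table[k][i] = min of A[i : i + 2**k]."""
    table = [list(A)]
    k = 1
    while (1 << k) <= len(A):
        prev, half = table[-1], 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(len(A) - (1 << k) + 1)])
        k += 1
    return table

def rmq(table, i, j):
    """Smallest element in A[i..j] (inclusive), answered in O(1) by
    covering the range with two overlapping windows of length 2**k."""
    k = (j - i + 1).bit_length() - 1
    return min(table[k][i], table[k][j - (1 << k) + 1])
```

Min is idempotent, so the two windows may overlap without affecting the answer; this is what makes the constant-time query possible.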
Abstract: In this paper, we perform an empirical evaluation of the Parallel External Memory (PEM) model in the context of geometric problems. In particular, we implement the parallel distribution sweeping framework of Ajwani, Sitchinava and Zeh to solve the batched 1-dimensional stabbing max problem. While modern processors consist of sophisticated memory systems (multiple levels of caches, set associativity, TLB, prefetching), we empirically show that algorithms designed in simple models that focus on minimizing the I/O transfers between shared memory and a single level of cache can lead to efficient software on current multicore architectures. Our implementation exhibits significantly fewer accesses to slow DRAM and, therefore, outperforms traditional approaches based on plane sweep and two-way divide and conquer.
PDF ARXIV DOI 
Abstract: We study a simple parallel algorithm for computing matchings in a graph. A variant for unweighted graphs finds a maximal matching using linear expected work and $O(\log^2 n)$ expected running time in the CREW PRAM model. Similar results also apply to External Memory, MapReduce and distributed memory models. In the maximum weight case the algorithm guarantees a 1/2-approximation. Although the parallel execution time is linear for worst-case weights, an experimental evaluation indicates good scalability on distributed memory machines and on GPUs. Furthermore, the solution quality is very good in practice.
PDF ARXIV DOI 
Abstract: We study the one-dimensional range minimum query (RMQ) problem in the external memory model. We provide the first space-optimal solution to the batched static version of the problem. On an instance with $N$ elements and $Q$ queries, our solution takes $\Theta(\mathrm{sort}(N+Q)) = \Theta(\frac{N+Q}{B}\log_{M/B} \frac{N+Q}{B})$ I/O complexity and $O(N+Q)$ space, where $M$ is the size of the main memory and $B$ is the block size. This is a factor of $O(\log_{M/B} N)$ improvement in space complexity over the previous solutions. We also show that an instance of the batched dynamic RMQ problem with $N$ updates and $Q$ queries can be solved in $O(\frac{N+Q}{B} \log^2_{M/B} \frac{N+Q}{B})$ I/O complexity and $O(N+Q)$ space.
PDF DOI 
Abstract:
We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism.
The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of $N$ insertions, deletions, membership queries, and range queries in the optimal $O(\mathrm{sort}_P(N) + K/PB)$ parallel I/O complexity, where $K$ is the size of the output reported in the process and $\mathrm{sort}_P(N)$ is the parallel I/O complexity of sorting $N$ elements using $P$ processors.
PDF DOI 
Abstract: We study the MapReduce framework from an algorithmic standpoint, providing a generalization of the previous algorithmic models for MapReduce. We present optimal solutions for the fundamental problems of all-prefix-sums, sorting and multisearching. Additionally, we design optimal simulations of the well-established PRAM and BSP models in MapReduce, immediately resulting in optimal solutions to the problems of computing fixed-dimensional linear programming and 2D and 3D convex hulls.
PDF ARXIV DOI 
Abstract: The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multicore architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained using this framework. The obtained algorithms were efficient but not optimal. In this paper, we improve the framework to obtain algorithms with the optimal I/O complexity of $O(\mathrm{sort}_P(N) + K/PB)$ for a number of problems on axis-aligned objects; $P$ denotes the number of cores/processors, $B$ denotes the number of elements that fit in a cache line, $N$ and $K$ denote the sizes of the input and output, respectively, and $\mathrm{sort}_P(N)$ denotes the I/O complexity of sorting $N$ items using $P$ processors in the PEM model. To obtain the above improvement, we present a new one-dimensional batched range counting algorithm on a sorted list of ranges and points that achieves an I/O complexity of $O((N + K)/PB)$, where $K$ is the sum of the counts of all the ranges. The key to achieving efficient load balancing among the processors in this algorithm is a new method to count the output without enumerating it, which might be of independent interest.
PDF DOI 
Abstract: We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1D range counting, weighted 2D dominance counting, and for computing 3D maxima, 2D lower envelopes, and 2D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems (orthogonal line segment intersection reporting, batched range reporting, and related problems), more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load-balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution sweeping technique inspired by its sequential counterpart.
PDF DOI 
Abstract: In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, the minimum spanning tree of a connected graph and the ear decomposition of a biconnected graph. All our solutions on a $P$-processor PEM model provide an optimal speedup of $\Theta(P)$ in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
PDF DOI 
Abstract: In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.
PDF DOI 
Abstract: We consider the problem of placing a small number of angle guards inside a simple polygon $P$ so as to provide efficient proofs that any given point is inside $P$. Each angle guard views an infinite wedge of the plane, and a point can prove membership in $P$ if it is inside the wedges for a set of guards whose common intersection contains no points outside the polygon. This model leads to a broad class of new art gallery type problems, which we call “sculpture garden” problems and for which we provide upper and lower bounds. In particular, we show there is a polygon $P$ such that a “natural” angle-guard vertex placement cannot fully distinguish between points on the inside and outside of $P$ (even if we place a guard at every vertex of $P$), which implies that Steiner-point guards are sometimes necessary. More generally, we show that, for any polygon $P$, there is a set of $n+2(h-1)$ angle guards that solve the sculpture garden problem for $P$, where $h$ is the number of holes in $P$ (so a simple polygon can be defined with $n-2$ guards). In addition, we show that, for any orthogonal polygon $P$, the sculpture garden problem can be solved using $\frac{n}{2}$ angle guards. We also give an example of a class of simple (non-general-position) polygons that have sculpture garden solutions using $O(\sqrt{n})$ guards, and we show this bound is optimal to within a constant factor. Finally, while optimizing the number of guards solving a sculpture garden problem for a particular $P$ is of unknown complexity, we show how to find in polynomial time a guard placement whose size is within a factor of $2$ of the optimal number for any particular polygon.
PDF DOI 
Abstract: This paper extends the reconfigurable shared scan-in architecture (RSSA) to provide the additional ability to change values on the scan configuration signals (scan enable signals) during the scan operation on a per-shift basis. We show that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same. In addition, a simpler analysis can be used to construct the scan chains. This is the first paper of its kind that treats the scan enable signal as a test data signal during the scan operation of a test pattern. Results are presented on some ISCAS as well as industrial circuits.
PDF DOI 
Abstract: In this paper, an efficient technique for test data volume reduction based on the shared scan-in (Illinois Scan) architecture and the scan chain reconfiguration (Dynamic Scan) architecture is defined. The composite architecture is created with analysis that relies on the compatibility relation of scan chains. Topological analysis and compatibility analysis are used to maximize gains in test data volume and test application time. The goal of the proposed synthesis procedure is to test all detectable faults in broadcast test mode using minimum scan-chain configurations. As a result, more aggressive sharing of scan inputs can be applied for test data volume and test application time reduction. The experimental results demonstrate the efficiency of the proposed architecture for real industrial circuits.
PDF DOI
Journal Articles

Abstract: Visibility graph reconstruction, which asks us to construct a polygon that has a given visibility graph, is a fundamental problem with unknown complexity (although visibility graph recognition is known to be in PSPACE). As far as we are aware, the only class of orthogonal polygons that are known to have efficient reconstruction algorithms is the class of orthogonal convex fans (staircase polygons) with uniform step lengths. We show that two classes of uniform step length polygons can be reconstructed efficiently by finding and removing rectangles formed between consecutive convex boundary vertices, called tabs. In particular, we give an $O(n^2m)$-time reconstruction algorithm for orthogonally convex polygons, where $n$ and $m$ are the number of vertices and edges in the visibility graph, respectively. We further show that reconstructing a monotone chain of staircases (a histogram) is fixed-parameter tractable, when parameterized on the number of tabs, and polynomially solvable in time $O(n^2m)$ under alignment restrictions. As a consequence of our reconstruction techniques, we also get recognition algorithms for visibility graphs of these classes of polygons with the same running times.
PDF DOI 
Abstract: Two factors primarily drive the soaring cost of semiconductor test: the number of test patterns applied to each chip and the time it takes to run each pattern. Typical semiconductor testing for each chip involves a set of 1,000 to 5,000 test patterns. These tests are applied through scan chains that operate at about 25 MHz. Depending on the size of the scan chains on the chip, a set of test patterns can take a few seconds to execute per chip. It's easy to see that even a small decrease in either the number of patterns or the time to execute them can quickly add up to big savings across millions of fabricated chips. This potential savings forms the basis for dynamic scan, a new approach to the well-established scan test methodology. The authors' initial studies indicate that dynamic scan could easily reduce the time spent applying test patterns by 40 percent. A more theoretical analysis shows a potential savings of as much as 80 percent.
PDF DOI
Refereed Workshops (without formally published proceedings)

N. Sitchinava, D. Strash: "Reconstructing a unit-length orthogonally convex polygon from its visibility graph". European Workshop on Computational Geometry (EuroCG '16). 2016.

P. Afshani, N. Sitchinava: "I/O-efficient range minima queries". 6th Workshop on Massive Data Algorithmics (MASSIVE '14). 2014.

N. Sitchinava, V. Weichert: "Provably efficient GPU algorithms". 5th Workshop on Massive Data Algorithmics (MASSIVE '13). 2013.

L. Arge, J. Fischer, P. Sanders, N. Sitchinava: "On (dynamic) range minimum queries in external memory". 5th Workshop on Massive Data Algorithmics (MASSIVE '13). 2013.

D. Ajwani, N. Sitchinava, N. Zeh: "I/Ooptimal distribution sweeping on privatecache chip multiprocessors". 3rd Workshop on Massive Data Algorithmics (MASSIVE '11). 2011.

D. Ajwani, N. Sitchinava, N. Zeh: "Geometric algorithms for privatecache chip multiprocessors". 2nd Workshop on Massive Data Algorithmics (MASSIVE '10). 2010.

L. Arge, M.T. Goodrich, N. Sitchinava: "Parallel external memory model". Workshop on Theory and Many-Cores (T&MC). 2009.

* N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gizdarski, T.W. Williams: "Dynamically reconfigurable shared scan-in architecture". IEEE International Test Synthesis Workshop (ITSW '04). 2004.

* N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gizdarski, T.W. Williams, D. Spielman: "A segment identification algorithm for a dynamic scan architecture". IEEE International Test Synthesis Workshop (ITSW '03). 2003.

* N. Sitchinava, S. Samaranayake, R. Kapur, M. Amin, T.W. Williams: "DFT-ATE solution to lower the cost of test". IEEE Workshop on Test Resource Partitioning. 2001.