Data Seminar
Generic orbit recovery from invariants of very low degree
Motivated by the multi-reference alignment (MRA) problem and by questions about equivariant neural networks, we study the problem of recovering a generic orbit in a representation of a finite group from invariant polynomials of degree at most 3. We prove that in many cases of interest these low-degree invariants are sufficient to recover the orbit of a generic vector.
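For concreteness (our illustration, not the speaker's): in the prototypical MRA instance the group is Z/n acting on R^n by cyclic shifts, and the invariants of degree 1, 2, and 3 are the mean, the autocorrelation, and the triple correlation. A minimal numerical sketch:

```python
# Degree-1/2/3 shift-invariants for Z/n acting on R^n by cyclic shifts:
# the mean, autocorrelation, and triple correlation used in MRA.
import numpy as np

def autocorrelation(x):             # degree 2: a_k = sum_i x_i x_{i+k}
    return np.array([x @ np.roll(x, -k) for k in range(len(x))])

def triple_correlation(x):          # degree 3: t_{k,l} = sum_i x_i x_{i+k} x_{i+l}
    n = len(x)
    return np.array([[(x * np.roll(x, -k) * np.roll(x, -l)).sum()
                      for l in range(n)] for k in range(n)])

x = np.random.default_rng(0).standard_normal(8)
for s in range(8):                  # the invariants agree on the whole orbit
    y = np.roll(x, s)
    assert np.isclose(y.sum(), x.sum())                       # degree 1: the mean
    assert np.allclose(autocorrelation(y), autocorrelation(x))
    assert np.allclose(triple_correlation(y), triple_correlation(x))
```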
Semester seminar calendar: https://sites.google.com/view/mathdatamizzou/home
Density estimation for Gaussian mixture models
Density estimation for Gaussian mixture models is a classical problem in statistics with applications in a variety of disciplines. Two solution techniques are commonly used: the method of moments and maximum likelihood estimation. This talk will discuss both methods, focusing on the underlying geometry of each problem.
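As a hedged illustration of the two approaches (our toy model, not the speaker's example): for the one-parameter mixture 0.5·N(−μ, 1) + 0.5·N(μ, 1), the second moment satisfies E[X²] = μ² + 1, so the method of moments inverts that identity, while maximum likelihood runs EM:

```python
# Toy comparison (our example): method of moments vs. EM/MLE for the mixture
# 0.5*N(-mu, 1) + 0.5*N(mu, 1), where only mu is unknown.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
mu_true, n = 2.0, 5000
signs = rng.integers(0, 2, n) * 2 - 1            # random component labels, as +/-1
x = rng.normal(loc=signs * mu_true, scale=1.0)

# Method of moments: E[X^2] = mu^2 + 1 for this model, so invert the identity.
mu_mom = np.sqrt(max(np.mean(x**2) - 1.0, 0.0))

# Maximum likelihood via EM.
gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
mu_mle = float(np.max(np.abs(gm.means_)))

print(mu_true, round(mu_mom, 3), round(mu_mle, 3))
```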
Homotopies for variational inference and approximate synthesis
For parameterized systems, one standard problem is to determine the set of parameters which "best" fits the given data. This talk will summarize two examples, both of which can be solved using homotopies. The first is variational inference, in which one searches a parameterized family of probability distributions for the distribution that best fits the given data. The second is synthesizing a linkage whose coupler curve best approximates the given data. This talk is joint work with Emma Cobian, Fang
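As a minimal sketch of the underlying tool (a generic predictor-corrector homotopy, not the speakers' software): deform an easy start system g into the target f via H(x, t) = (1−t)g(x) + tf(x) and track a root from t = 0 to t = 1:

```python
# Minimal predictor-corrector homotopy (a generic sketch, not the speakers' code):
# track a root of the start system g along H(x,t) = (1-t)*g(x) + t*f(x).
import numpy as np

def f(x):  return x**2 - 3*x + 1      # target system
def df(x): return 2*x - 3
def g(x):  return x**2 - 1            # easy start system with known roots
def dg(x): return 2*x

def H(x, t):   return (1 - t) * g(x) + t * f(x)
def dHx(x, t): return (1 - t) * dg(x) + t * df(x)

x = 1.0                                # start at a root of g
for t in np.linspace(0.0, 1.0, 101)[1:]:
    for _ in range(10):                # Newton corrector at each step in t
        x -= H(x, t) / dHx(x, t)

print(x, f(x))                         # converges to the root (3 + sqrt(5))/2 of f
```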
A fourth moment theorem for estimating subgraph counts in large graphs
Given a large network, one is often interested in efficiently estimating various local statistics. In this talk, we'll discuss the distribution of one possible estimator, which arises from counting monochromatic subgraphs in a random vertex coloring.
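A hedged toy version of such an estimator, using edges as the subgraph (our illustration): under a uniformly random c-coloring a fixed edge is monochromatic with probability 1/c, so c times the monochromatic edge count is an unbiased estimate of the total edge count:

```python
# Toy monochromatic-subgraph estimator (edges as the subgraph).
import numpy as np

rng = np.random.default_rng(1)
n, p, c = 300, 0.1, 8
A = np.triu(rng.random((n, n)) < p, k=1)     # random graph, upper-triangular adjacency
colors = rng.integers(0, c, size=n)          # uniformly random vertex coloring
same = colors[:, None] == colors[None, :]    # monochromatic pairs
estimate = c * int((A & same).sum())         # unbiased for the edge count
print(estimate, int(A.sum()))                # estimate vs. true edge count
```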
ReLU transformers and piecewise polynomials
We highlight a perhaps important but hitherto unobserved insight: the attention module in a ReLU transformer is a cubic spline. Viewed in this manner, this mysterious but critical component of a transformer becomes a natural development of an old notion deeply entrenched in classical approximation theory. Conversely, if we assume the Pierce–Birkhoff conjecture, then every spline is also an encoder.
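A hedged numerical illustration of the claim (our construction, with softmax replaced by ReLU): each output entry is ReLU(quadratic in X) times a linear function of X, hence piecewise cubic along any line in input space:

```python
# Illustration (our construction): attention with ReLU in place of softmax.
import numpy as np

rng = np.random.default_rng(0)
T, d = 3, 4
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def relu_attention(X):
    scores = np.maximum(X @ Wq @ (X @ Wk).T, 0.0)    # ReLU instead of softmax
    return scores @ (X @ Wv)                         # entries: ReLU(quadratic) * linear

# Restrict to a line X(t) = X0 + t*D: the output is piecewise cubic in t.
X0, D = rng.standard_normal((T, d)), rng.standard_normal((T, d))
ts = np.linspace(-1.0, 1.0, 9)
vals = np.array([relu_attention(X0 + t * D)[0, 0] for t in ts])
# Third finite differences are constant on runs where no ReLU argument changes
# sign (a single cubic piece); jumps between runs indicate spline knots.
print(np.diff(vals, 3))
```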
Degree Bounds for Rational Invariants
Degree bounds have a long history in invariant theory. The Noether bound on the degrees of algebra generators for a ring of invariants is over a century old, and there is a vast literature sharpening and generalizing it. In the last two decades, there has also been an active program on degree bounds for separating invariants: invariants that distinguish orbits as well as algebra generators can.
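For context, a standard statement of the Noether bound (characteristic zero), not specific to this talk:

```latex
% Noether's bound: for a finite group G acting on a vector space V over a field
% k of characteristic zero, the invariant ring k[V]^G is generated by
% homogeneous invariants of degree at most the group order.
\beta\bigl(k[V]^{G}\bigr) \;\le\; |G|
```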
Reconstructing the Geometry of Random Geometric Graphs
Random geometric graphs are random graph models defined on metric spaces. Such a model is defined by first sampling points from a metric space and then connecting each pair of sampled points, independently across pairs, with a probability that depends on their distance.
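A minimal sketch of the model just described, with a hard distance threshold as the connection probability (soft kernels work the same way):

```python
# Sampling a random geometric graph (our minimal instance): points uniform in
# the unit square, edges between pairs closer than a threshold r.
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 0.15
pts = rng.random((n, 2))                                   # sampled points
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = (dists < r) & ~np.eye(n, dtype=bool)                   # adjacency matrix
print(A.sum() // 2, "edges")
```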
Two uniqueness results for the method-of-moments in cryo-EM
This talk considers provable methods for cryo-electron microscopy (cryo-EM), an increasingly popular imaging technique for reconstructing 3-D biological macromolecules from a collection of noisy and randomly oriented projection images, with applications in, e.g., drug design. The talk will present two uniqueness guarantees for recovering these structures from the second moment of the projection images, as well as two associated numerical algorithms.
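As a hedged sketch of the statistic involved (our illustration with placeholder data): the second moment of the images is their empirical d × d second-moment matrix:

```python
# Hedged sketch with placeholder data: the second-moment statistic of a stack
# of vectorized projection images, the input to the method of moments.
import numpy as np

rng = np.random.default_rng(0)
n_imgs, side = 1000, 32
imgs = rng.standard_normal((n_imgs, side * side))   # stand-in for real projection images
first_moment = imgs.mean(axis=0)                    # length-d vector
second_moment = imgs.T @ imgs / n_imgs              # d x d matrix
print(second_moment.shape)
```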
Quiver representations and the Paulsen Problem in Frame Theory
Parseval frames provide redundant encodings, with equal-norm Parseval frames offering optimal robustness against one erasure. However, constructing such frames can be challenging. The Paulsen Problem asks how far an ε-nearly equal-norm Parseval frame can be from the set of all equal-norm Parseval frames. In this talk, I will present an approach to the Paulsen Problem based on quiver invariant theory.
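As a hedged sketch of the objects involved (our illustration): a Parseval frame of n vectors in R^d is a d × n matrix F with FFᵀ = I, and "equal-norm" asks every column to have squared norm d/n; the code below measures both defects for a frame obtained by polar decomposition:

```python
# Our illustration: build a Parseval frame by polar decomposition and measure
# how far it is from the Parseval and equal-norm conditions.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 7
G = rng.standard_normal((d, n))
U, _, Vt = np.linalg.svd(G, full_matrices=False)
F = U @ Vt                                          # satisfies F @ F.T = I_d
parseval_defect = np.linalg.norm(F @ F.T - np.eye(d))
norm_defect = np.max(np.abs(np.linalg.norm(F, axis=0)**2 - d / n))
print(parseval_defect, norm_defect)                 # first is ~0; second is not, in general
```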
Is uniform expressivity for GNNs too restrictive?
Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without the parameters depending on the size of the input graphs. This property is desirable in applications, since it yields a number of trainable parameters that is independent of the size of the input graphs. Uniform expressivity of the two-variable guarded fragment (GC2) of first-order logic is a well-celebrated result for Rectified Linear Unit (ReLU) GNNs [Barceló et al., 2020].
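A hedged sketch of what "uniform" means here (our illustration): a ReLU message-passing layer whose trainable weights have a fixed size, reused for every node, regardless of the input graph:

```python
# Our illustration of size-independent parameters: one ReLU message-passing
# layer whose weights depend only on the feature dimension d, not on n.
import numpy as np

def gnn_layer(A, H, W_self, W_neigh):
    # A: n x n adjacency, H: n x d features; the same weights serve every node.
    return np.maximum(H @ W_self + A @ H @ W_neigh, 0.0)

rng = np.random.default_rng(0)
d = 8
W_self, W_neigh = rng.standard_normal((d, d)), rng.standard_normal((d, d))
for n in (10, 1000):                 # identical parameter count for any graph size
    A = (rng.random((n, n)) < 0.1).astype(float)
    H = rng.standard_normal((n, d))
    print(n, gnn_layer(A, H, W_self, W_neigh).shape)
```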