From 898cb000829b2e7bee2245f9f2210664943ad68a Mon Sep 17 00:00:00 2001
From: Luke Naylor <l.naylor@sms.ed.ac.uk>
Date: Mon, 1 Apr 2024 15:39:23 +0100
Subject: [PATCH] Start Benchmark section

---
 content.tex | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/content.tex b/content.tex
index 5d2c559..b74c618 100644
--- a/content.tex
+++ b/content.tex
@@ -1888,7 +1888,9 @@ above.
 The way it works is by yielding solutions to the problem
 $u=(r,c\ell,\frac{e}{2}\ell^2)$ as follows.
 
-\subsection{Iterating Over Possible
+\subsection{Algorithm}
+
+\subsubsection{Iterating Over Possible
 \texorpdfstring{$q=\chern^{\beta_{-}}(u)$}{q}}
 
 Given a Chern character $v$, the conditions on the domain of the problem are first verified: that
@@ -1921,7 +1923,7 @@ $\chern_1^{\beta_{-}}(u)=q$ for one of the $q$ considered is equivalent to
 satisfying condition \ref{item:chern1bound:lem:num_test_prob2}
 in corollary \ref{cor:num_test_prob2}.
 
-\subsection{Iterating Over Possible
+\subsubsection{Iterating Over Possible
 \texorpdfstring{$r=\chern_0(u)$}{r}
 for Fixed
 \texorpdfstring{$q=\chern^{\beta_{-}}(u)$}{q}
@@ -1964,7 +1966,7 @@ Iterate over such $r$ so that we are guaranteed to satisfy conditions
 in corollary
 \ref{cor:num_test_prob2}, and have a chance at satisfying the rest.
 
-\subsection{Iterating Over Possible
+\subsubsection{Iterating Over Possible
 \texorpdfstring{$d=\chern_2(u)$}{d}
 for Fixed
 \texorpdfstring{$r=\chern_0(u)$}{r}
@@ -1991,3 +1993,27 @@ just pick the integers $e$ that give $d$ values within the bounds.
 Thus, this process yields all solutions $u=(r,c\ell,\frac{e}{2}\ell^2)$
 to the problem for this choice of $v$.
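+
+For illustration, picking the integers $e$ that give $d$ values within the
+bounds could look like the following minimal sketch (in Python; the helper
+name and the precomputed bounds \texttt{d\_min}, \texttt{d\_max} are
+hypothetical placeholders for the quantities described above, not part of the
+implementation accompanying this text):
+\begin{verbatim}
+from math import ceil, floor
+
+def solutions_for_fixed_r_q(r, c, d_min, d_max):
+    """Yield u = (r, c*l, (e/2)*l^2) for fixed r and q, by picking the
+    integers e whose d = e/2 lies within the precomputed bounds."""
+    e_min = ceil(2 * d_min)   # smallest integer e with e/2 >= d_min
+    e_max = floor(2 * d_max)  # largest integer e with e/2 <= d_max
+    for e in range(e_min, e_max + 1):
+        yield (r, c, e / 2)
+\end{verbatim}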
 
+\subsection{Benchmarking Different Bounds}
+
+The bounds on the ranks of solutions to problem
+\ref{problem:problem-statement-2}
+given by theorems
+\ref{thm:loose-bound-on-r},
+\ref{thm:rmax_with_uniform_eps} and
+\ref{thm:rmax_with_eps1} have been shown in passing to be successively
+tighter.
+However, in principle, this might not translate into a decrease in the
+computational time needed to find the solutions to the problem.
+This could happen for a number of reasons:
+\begin{itemize}
+	\item Unexpected compiler optimisations for one particular form of the
+		program.
+	\item The increased cost of computing the tighter bounds themselves.
+	\item Features of modern CPU architectures, such as branch predictors
+		\cite{BranchPredictor2024}, may offset the overhead of considering ranks
+		that turn out to be too large to admit any solutions.
+\end{itemize}
+
+
+However, as verified here, these potential overheads do not turn out to be
+significant when using the ``better'' theorems.
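+
+As a rough outline, such a benchmark can be set up along the following lines.
+This is only a minimal sketch (in Python): \texttt{find\_solutions} and the
+two bound functions are hypothetical placeholders standing in for the actual
+routines and bounds, and the sample input is arbitrary.
+\begin{verbatim}
+import timeit
+
+# Hypothetical placeholders for the real search routine and rank bounds.
+def loose_r_max(v):
+    return 10_000   # stand-in for a looser bound on the rank
+
+def tight_r_max(v):
+    return 100      # stand-in for a tighter bound on the rank
+
+def find_solutions(v, r_max):
+    # placeholder: iterate over candidate ranks up to the given bound
+    return [r for r in range(1, r_max(v) + 1)]
+
+v = (3, 2, -2)  # sample Chern character (ch_0, ch_1/l, ch_2/l^2)
+
+for name, bound in [("loose", loose_r_max), ("tight", tight_r_max)]:
+    seconds = timeit.timeit(lambda: find_solutions(v, bound), number=1000)
+    print(f"{name} bound: {seconds:.3f}s for 1000 runs")
+\end{verbatim}
+In practice the placeholders would be replaced by the actual implementations
+of the search and of the bounds from the theorems above.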
-- 
GitLab