Polymath16, ninth thread: Searching for a 6-coloring

This is the ninth “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a follow-up to Aubrey de Grey’s breakthrough result that the chromatic number of the plane is at least 5. Discussion of the project of a non-research nature should continue in the Polymath proposal page. We will summarize progress on the Polymath wiki page.

Here’s a brief summary of the progress made in the previous thread:

– We can now 6-color [0,2]\times\mathbb{R}.

– We now have a laundry list of necessary conditions for a certain family of tile-based 6-colorings of the plane.

The next step is to search for a 6-coloring of the plane. The basic idea is to build up colored plane graphs and prune with our necessary conditions. We were hoping that existing code to generate plane graphs (namely, plantri) could be used as a blackbox for this search, but after further inspection, it seems that we’ll have to write this code ourselves. (When building up plane graphs, we will be adding vertices to the outer face, whereas plantri treats all faces equally.)
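
As a purely illustrative sketch of this plan (and not anyone’s actual code), here is a minimal grow-and-prune skeleton in Python; the configuration objects, the extension step and the validity test are placeholders for the tile machinery and the necessary conditions discussed in the comments below.

```python
def grow_and_prune(seeds, extend, is_valid, size, target_size):
    """Depth-first grow-and-prune over partially built tile configurations.

    seeds       -- iterable of starting configurations (e.g. single tiles)
    extend      -- function(config) -> iterable of configurations with one more
                   tile added to the outer face
    is_valid    -- function(config) -> bool, the pruning test assembled from the
                   necessary conditions collected on the wiki
    size        -- function(config) -> number of tiles in the configuration
    target_size -- stop growing a configuration once it reaches this many tiles
    """
    stack, results = list(seeds), []
    while stack:
        config = stack.pop()
        if size(config) >= target_size:
            results.append(config)   # a surviving candidate island
            continue
        stack.extend(c for c in extend(config) if is_valid(c))
    return results
```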


62 thoughts on “Polymath16, ninth thread: Searching for a 6-coloring”

  1. Here is a spreadsheet of tiles for 6-colourings. Rules 1, 2, 4 and 7 have been used to remove tiles, while rules 3, 5, 6 and 8 are shown where they fail. Different colour variants for each tile are given. If you don’t care about those, just use variant 0.

    I will add more to this as we proceed.

    http://phillgibbs.uk/out1.csv

      1. That was intentional. I was just trying to make it easier to select a list of valid tiles without worrying about the colour variants. In fact for the next version I may just give the number of variants instead of listing them unless you can see a use for them.

      2. List them, but all in the same cell (like the v,v constraints) rather than with separate rows. Can you explain what the numbers in that colours column mean? Also it would be great to use a more immediately intelligible notation for the v,e constraints – maybe a list of v1,v2,v3 where v2v3 is the edge? Then that could also be just one column with >=0 entries per cell.

      3. For the colouring it is assumed that the central tile is coloured with colour 5 and the tile joining edge i starting at zero is coloured with colour i. The numbers then give the colours for the tiles joining at a vertex only, listed in clockwise order.

        I will implement your suggestions.

      4. Very nice! So I’m seeing a total of 354 good tiles, after deduplicating chiral pairs (i.e. deleting the ones labelled “L”). It will be very exciting to see how many of them can be eliminated in the next step, i.e. when the neighbouring tiles are required to be taken from the set rather than just being well-behaved in the vicinity of the central tile.

      5. I will start to look at that next, but will take it in small steps. There might be a lot to be learnt from joining tiles together two at a time to begin with. That was what I found when doing it by hand with the small set of tiles.

      6. Just as we found rules for the validity of tiles, we can think of rules that determine when two edges of two tiles can be joined. An edge is determined firstly by the valencies of its two vertices. When two edges come together to join two tiles, these valencies must match in reverse. Also, if there is a vertex-edge constraint on one there cannot be one on the other. (A code sketch of these matching tests is given below.)

        Since we are looking at all possible combinations of vertex-vertex constraints and are not assuming that the constraints are being used by the colouring we can assume that if a diagonal or edge is not a vertex-vertex constraint then its length is strictly less than one. This means a constrained edge must match with a constrained edge and an unconstrained edge must match an unconstrained edge. (Here I mean a constraint between the two vertices of the edge, not a constraint between the edge and another vertex)

        Apart from those matching conditions there are more complex ones that are required to avoid colouring inconsistencies, and possibly other geometric inconsistencies involving the angles at the vertices etc.
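
        To make the simple matching conditions above concrete, here is a hedged Python sketch. The Edge record and its fields are hypothetical stand-ins for whatever representation ends up being used, and the more complex colouring and angle conditions are not attempted here.

        ```python
        from dataclasses import dataclass

        # Hypothetical packaging of the edge properties discussed above.
        @dataclass
        class Edge:
            start_valency: int           # valency of the vertex where the edge starts
            end_valency: int             # valency of the vertex where the edge ends
            has_vertex_edge: bool        # a vertex-edge constraint involves this edge
            endpoints_constrained: bool  # vertex-vertex constraint between its own endpoints

        def can_join(a: Edge, b: Edge) -> bool:
            # The two tiles traverse the shared edge in opposite directions,
            # so the endpoint valencies must match in reverse.
            if (a.start_valency, a.end_valency) != (b.end_valency, b.start_valency):
                return False
            # A vertex-edge constraint can sit on at most one side of the shared edge.
            if a.has_vertex_edge and b.has_vertex_edge:
                return False
            # A constrained (unit-length) edge can only be glued to another
            # constrained edge, and an unconstrained edge only to an unconstrained one.
            return a.endpoints_constrained == b.endpoints_constrained
        ```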

      7. I think colouring variants found for each tile can be efficiently compared for consistency whenever two (or more) tiles are brought together, so it is just any other geometric rules that would be useful to know.

  2. I did not understand the proof of the claim “we can now 6-color [0,2]\times\mathbb{R}.”
    You are probably right, I just do not know what the proof is. It was hyperlinked to some inadequate gibberish. It seemed like a brute-force pixel coloring ought to
    work to solve this to pretty good accuracy of the maximum strip-width…

    Here is another question: What is the largest radius of a 3-dimensional ball,
    such that its surface can be k-colored (for example k=6)? Use geodesic distance.
    E.g. if we use the (radial projection of) a regular dodecahedron, then if we scale so the diagonal of a face is 1-epsilon, we can 6-color the 12 faces, each face getting the same color as its antipode.

    P. Erdos, D. Hickerson, and J. Pach: A problem of Leo Moser about repeated distances on the sphere, Amer. Math. Monthly 96 (1989) 569-575.
    showed that on a sphere of circumference 4 one could place N points
    so their unit geodesic-distance graph had order N^(4/3) edges
    and indeed each vertex had valency of order N^(1/3).
    Despite the high valencies their graph in general actually is bipartite (2-colorable),
    although it is always possible (via a preliminary projective transformation, which if I am counting correctly has 5 controllable degrees of freedom…)
    to force up to 5 “extra edges” each causing 3-cycles.

    It seems to me that the sphere is interesting in the following sense.
    Some graphs, such as the Moser spindle, still work fine on
    any sphere of large enough radius. Call them “generically valid.”
    Other graphs, like perhaps de Grey’s, perhaps depend on special
    miracles only valid on the exactly-flat plane, and do not work if
    any amount of constant curvature, no matter how small, is added.
    Similarly the Erdos/Hickerson/Pach graphs are not generically valid; they only
    work for the very special sphere-circumference value 4.

    “Generically valid” graphs remain embeddable on the sphere no matter
    if the sphere’s radius is continuously varied (within some radius-interval
    of positive width).

    Which of the graphs you all have been assiduously producing are generically
    valid?

    1. The tiling of the surface of a sphere has indeed been discussed in the literature – I mentioned it here:

      https://dustingmixon.wordpress.com/2018/06/16/polymath16-seventh-thread-upper-bounds/#comment-4800

      where I note that the sphere of circumference 4 is special in that regard too. A few months ago I thought I had found a gorgeous 36-tile 6-colour covering of a sphere of circumference just over 8 – it was sooo neat… but unfortunately one of the types of tile turned out to have a geodesic diameter of 1.1 or so. So it remains the case that the only known cases in which more than one tile is the same colour are the octahedron on a sphere of circumference exactly 4 and the dodecahedron of circumference in the obvious range.

      I agree, the generic-ness of a finite graph to non-Euclidean geometries seems interesting. I guess the hyperbolic case is just as interesting as the spherical case. I don’t know of any work in this area.

    2. Actually the strip was not meant to be a new record width. We already had a strip of width 3.7 in thread 7. This one was interesting because it was five tiles wide in the grid representation we were using, but its physical width is less than the four tile wide strip.

    3. The 12 vertices of the icosahedron on a sphere (or, if we identify each point with its antipode, so that we are speaking of “elliptic” non-Euclidean geometry rather than “spherical”, the resulting 6 points) form a set of 6 “equiangular lines” through the sphere-center.

      Therefore, the elliptic non-Euclidean plane, with unit distance
      being the length of an icosahedron edge, requires at least 6 colors.
      This is a nongeneric result of course since it depends on this very special distance.

      But for what it is worth, it breaks the 6 barrier.

      (There is a considerable literature on “equiangular lines” mainly in higher dimensions.)

  3. I have a very naive question regarding the $[0,2]\times\mathbb{R}$ strip example,

    https://dustingmixon.wordpress.com/2018/06/24/polymath16-eighth-thread-more-upper-bounds/#comment-4937

    I have not completely followed the discussion, nor have I collected all the relevant conditions needed; sorry for that.

    But I wonder and would like to ask simply out of curiosity,
    what happens if one numbers the (o)range, (v)iolet and (g)reen tiles of the strip example top-down as
    $v1, o1, g, v2, o2,$ and replaces them, top to bottom, by
    $v(v1), o(o1), g(v1), v(o1), o(v1), g(o1), v(v1), o(o1), g(v1), v(o1), o(v1), g(o1), …$

    where e.g. $g(v1)$ is a green colored, translated (not rotated, not scaled) version of $v1$.

    Would this give a larger strip example?

    1. No – your v(v1) and v(o1) are less than 1 apart. Even to get a height of 5 tiles we need the central tile to be “fat” at both top and bottom.

  4. [starting a new reply so as to skip past the more recent topics]

    Actually I’m not very optimistic about the incremental approach here. Yes we will be able to eliminate lots of combinations of abutting tiles, but I’m pretty sure that the number of combinations that we can’t eliminate will still be combinatorially prohibitive. I thus feel that we will need to go to SAT-solving on a large island.

    1. There are 2,788,341 different ways we can ask if two tiles join. Each of these needs some computation to see if the edge characteristics match and if the result can be coloured. There may be something like 10,000 ways that work. As we attempt to build our islands we will have to ask these same questions many times, so we can gain a speed-up factor of three or four orders of magnitude just by testing all possible matchings once and making a table. SAT solvers are clever but they probably don’t figure out that kind of optimisation.

      Once we know which tiles and edges can be joined the next step would be to look at which tile combinations can be placed round a vertex. There will be a lot of angle constraints that come into play when we do this so again a large factor could be gained by making a list of all possibilities. Hopefully the numbers will still be something that can be stored in a table. A SAT-solver would not automatically understand constraints posed by angles so this is something we will have to investigate.

      The third step will be to construct all 1-Uzbek islands. If you launched into a search for these without first caching the results of the first two steps I don’t think it would be possible. I hope this list of all possible 1-Uzbek islands will be small enough to fit in memory but subsequent steps will depend on numbers. If we can store all viable 1-Uzbeks then building larger islands becomes much more tractable.
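
      Here is a minimal sketch of the “test every pairing once and tabulate” idea; the names are illustrative only, with tiles assumed to carry an edges list and can_join standing for whatever matching-plus-colouring test we settle on.

      ```python
      from itertools import combinations_with_replacement

      def build_join_table(tiles, can_join):
          """Test every tile-edge pairing once and record the viable gluings.

          tiles    -- list of tile records, each with an .edges list (illustrative)
          can_join -- predicate on a pair of edges (matching plus colouring test)
          """
          table = set()   # (i, ei, j, ej): edge ei of tile i fits edge ej of tile j
          for (i, t1), (j, t2) in combinations_with_replacement(list(enumerate(tiles)), 2):
              for ei, e1 in enumerate(t1.edges):
                  for ej, e2 in enumerate(t2.edges):
                      if can_join(e1, e2):
                          table.add((i, ei, j, ej))
          return table

      # While growing islands, a candidate gluing then costs one set lookup instead
      # of repeating the geometric and colouring computation.
      ```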

      1. Yeah, your algorithm looks about as good as can be done. (You mean 2-Uzbek though – tiles surrounded entirely, i.e. including vertex-neighbours, by other tiles – remember that by my definition the ocean itself is the only 0-Uzbek region.) But I’m still not optimistic… there just seem likely to be too many ways to do the high-valency vertices.

        The angles aspect is a good point but I think it can also be SATed. We already have (rule 3) that at most one edge of a 3-valent vertex can be 1 (straight-line distance, of course, not length of the possibly curved edge). Analogously, at most valency-2 of a 4- or 5-valent vertex can be 1. Can more than that be said arising from angle issues?

      2. I have been running some preliminary matching and do indeed get very high numbers of matching tiles joined at edges, about 270k pairs, but I have not yet looked at colouring.

        My hope is that we are getting large numbers only because we have put a lot of information into a tile. If we ignored vertex-vertex constraints we would have smaller numbers of tiles. However, I hope that the extra information will narrow down the number of possibilities when we fit more of them together so that the numbers of k-Uzbek islands will be smaller for sufficiently large k. If someone used a simpler tile set and ran a SAT solver it could be better or worse depending on when the pay-off for more information comes in.

        Any tile vertex has a maximum and minimum angle as determined by its constraints. We can look at angles between lines drawn from vertex to vertex, or at angles at the vertex itself, which will differ depending on the curvature. In both cases the total of these angles around a vertex has to be 360 degrees, so having a maximum and minimum for each tile vertex provides a consistency check (sketched in code at the end of this comment). I think this will severely narrow down the possibilities for arranging tiles around a vertex.

        You say there could be too many ways for high valency, but with high valency it is hard to squeeze in the tiles. It could be the low-valency tiles that are harder to deal with because they have fewer constraints. We will see. Intuitive expectations can be wrong.

        I think I am going to write some general algorithms for colouring combinations of tiles so I will be investing some time that will make the initial results take longer but progress will speed up after that.
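
        A minimal sketch of that angle consistency check, assuming each tile corner is summarised by an interval of feasible angles in degrees (the representation is illustrative, not anything already coded):

        ```python
        # A set of tile corners can meet at a common plane vertex only if their
        # angle intervals can sum to exactly 360 degrees.
        def corners_can_meet(corner_intervals):
            """corner_intervals: list of (min_deg, max_deg), one pair per tile corner."""
            total_min = sum(lo for lo, hi in corner_intervals)
            total_max = sum(hi for lo, hi in corner_intervals)
            return total_min <= 360 <= total_max

        # Example: three corners each forced between 90 and 110 degrees cannot close
        # up around a vertex, since 3 * 110 = 330 < 360.
        assert not corners_can_meet([(90, 110)] * 3)
        ```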

  5. @Dustin: could you elaborate a little more on why plantri won’t work for us? I’m thinking that we don’t necessarily need to think in terms of building up graphs by sequentially adding vertices. Instead, maybe we could enumerate all essentially distinct graphs with N vertices (N around 30) that can be the dual of the island. Such graphs would have one special vertex (corresponding to the ocean) with arbitrary degree, together with the following other constraints:

    – all other vertices have degree between 2 and 5
    – there are no uncrossed cycles of order greater than 6
    – neighbouring vertices cannot both be of degree 2 (this can probably be strengthened)
    – there is at least one vertex at distance 4 from the ocean

    If that comes out to a manageable number of graphs then the only remaining step would be to see whether all vertices other than the ocean can be assigned compatibly to members of Philip’s list (noting that the degree of neighbours of the ocean needs to be treated specially, i.e. tiles with more vertices than the degree of those neighbours need to be allowed), which is a question that seems made for SAT. If on the other hand the number of graphs is too great, some Philip-constraints (or near-constraints) could be thrown in earlier, e.g. just ban degree-2 vertices entirely, or ban degree-5 vertices of uncrossed hexagons. (A crude filter for the degree and distance constraints above is sketched below.)
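
    Here is a crude sketch of the degree and distance filters just described, with the adjacency given as a plain dict of neighbour sets; the “no long uncrossed cycles” condition is omitted since it needs the plane embedding.

    ```python
    from collections import deque

    def passes_basic_filters(graph, ocean):
        """graph: dict {vertex: set of neighbours}; ocean: the special outer vertex."""
        for v, nbrs in graph.items():
            if v == ocean:
                continue                 # the ocean may have arbitrary degree
            if not 2 <= len(nbrs) <= 5:
                return False
            if len(nbrs) == 2 and any(u != ocean and len(graph[u]) == 2 for u in nbrs):
                return False             # neighbouring degree-2 vertices
        # breadth-first search from the ocean; require some vertex at distance >= 4
        dist = {ocean: 0}
        queue = deque([ocean])
        while queue:
            v = queue.popleft()
            for u in graph[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        return max(dist.values()) >= 4
    ```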

    How does that sound? Is plantri suitable for that?

    1. I expect this method of using plantri to fail because there are simply too many planar/plane graphs. https://oeis.org/A005470 and https://oeis.org/A003094 can give a general sense of how fast the number of planar graphs grows.

      But of course we have some restrictions on the graphs, like degrees of vertices. I don’t think they’ll help much, using eg https://oeis.org/A000168 as a guide.

      So I wanted to use plantri to “grow” graphs where each intermediate stage is known to be colorable according to our constraints. But as best as I can tell, the internals of plantri don’t allow for that in a way that works for us.

      (I could be wrong.)

      1. Yeah, some kind of growing seems to be needed.

        What do you think about initially using plantri just to enumerate Philip’s next generation, i.e. islands with one completely landlocked tile? Philip noted that he expects it to be tractable to combine such islands to create ones with a doubly-landlocked tile, etc.

    2. Actually on reading the plantri documentation I see there is a -P option that seems to do a lot of what’s needed, i.e. it provides precisely the special “outer face” you mentioned. Since the graphs being generated are triangulations, it probably makes sense to look not for duals of a tiling but instead for graphs that would correspond to a tiling with a vertex added for each tile that is joined to the tile’s vertices. Each vertex corresponding to a tile vertex would then have even degree between 6 and 12, and each vertex corresponding to a tile would have degree between 2 and 5. Since these degree sets are disjoint we can quickly say, for example, that all pairs of vertices of degree less than 6 must be an even distance apart, and that the degrees of the neighbours of a vertex with degree at least 6 must alternate between small and large. I guess other tricks will be needed in order to keep the computations manageable – maybe there will be mileage in identifying prohibited subgraphs of increasing size.

      1. I have not gone any further with this yet, but usually I find it easier to write my own code than try to massage some prewritten code into doing what I want. Growing general planar graphs is not a difficult programming task. The problem is to find a way of doing this that is going to be efficient enough to work.

        If you count our good tiles by numbers of sides and valency alone there are 78, which is already a high number to fit together. We can try to colour graphs made of these such that no tile is joined by an edge to another tile of the same colour and no vertex has two tiles of the same colour. However, there are many solutions to colour this way with six colours, so more constraints are needed.

        If we add just the edge-vertex constraints then the number of good tiles jumps dramatically to 313. The grid-based search could only construct solutions up to 6 by 10, but if we allow any planar graph it could take a lot of computation to reach that size, and there may even be solutions.

        Adding in the vertex-vertex constraints gives 563 tiles, but imposes more geometric constraints. Will that be better?

        It may be useful to classify tiles in the dual graph first so I am going to think about that before starting the coding.

  6. I just went back to the question of iterating on Proposition 36, and I’m wondering if there is a problem with it in its current (un-iterated) form. The construction appears to allow BE to be arbitrarily small if d is in the vicinity of 1/\sqrt{2}, and yet BE is one of the distances required to be at least 1/2 in the proof. Am I messing up, or is there a variation on the construction for these problematic ranges of values of d?

    Presuming we can fix that: if I understand it, the minimum permissible value of d if we want to iterate infinitely is obtained when we take B,C,D to lie on a circle of radius d such that |BD| = d and |BC| = |CD| = 1. Thus d = (\sqrt 3 - 1)/\sqrt 2, so a hair more than 2/\sqrt{15}, because we can no longer allow BD to go as low as exactly 1/2 – right? At that limit we have |AE| = |DF| = d, |BE| = d \sqrt 3 and |BF| = 2d, so everything else is good. I don’t however trust myself to get the last bit right and come up with the correct multiple of delta – I will leave that to Domotor or Terry.
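
    For what it is worth, here is the chord arithmetic behind that value, assuming I am reading the construction correctly (B, C, D on a circle of radius d with |BD| = d and |BC| = |CD| = 1): a chord of length \ell on a circle of radius d subtends a central angle 2\arcsin(\ell/2d), so BD subtends 2\arcsin(1/2) = 60^\circ and each of the two unit chords must subtend 150^\circ, giving 1 = 2d\sin 75^\circ, i.e. d = 1/(2\cos 15^\circ) = (\sqrt 6 - \sqrt 2)/2 = (\sqrt 3 - 1)/\sqrt 2 \approx 0.5176, indeed a hair more than 2/\sqrt{15} \approx 0.5164.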

    Finally: are there particular values of d (in particular, this minimum value) for which this proof gives a sharper upper bound because additional distances are exactly d?

    1. Hang on… there is a step in the proof that says that AE is monochromatic with probability at most 3 delta. But |AE| = d (whatever the value of d), so AE is monochromatic with probability exactly 1/2 – delta, meaning delta is at least 1/8. Surely I am messing up – where?

      1. Ah, apologies, of course it’s only “outside of epsilon” that AE is monochromatic with probability at most 3δ.

    2. Concerning my first point, I think I see the issue now: the statements that E and F are the reflections of B and A in AD and DE respectively should actually say that those points are the 180-degree rotations of those points around the midpoints of those lines, such that ABDE and BDFE are parallelograms rather than a kite and a rhombus. In other words, ABFE is a trapezium and D is the midpoint of BF. Now the whole construction makes sense!

      1. Based on this, I think I can shave two of the deltas off. Rather than the trapezium construction, let ABCD be as at the wiki but now let E be actually the reflection of B in AD, and let F be the reflection of B in the line perpendicular to AD that bisects it. Then we again have three triangles of sides d, d and the distance required to make BC=CD=1, so the proof can proceed in the same way as on the wiki, except that now every pair of triangles shares the edge AD, so in the last step BF is monochromatic outside of epsilon with probability at most 5δ, not 7δ. Thus I think we get p down to 14/29 rather than 15/31, and of course that is before iterating. Please correct me if I’m wrong.
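
        For the arithmetic behind those two fractions, on my reading: a final inequality of the form 29δ \ge 1/2 gives δ \ge 1/58 and hence p(d) \le 1/2 - 1/58 = 14/29, while the trapezium version gives 31δ \ge 1/2, so δ \ge 1/62 and p(d) \le 1/2 - 1/62 = 15/31.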

      2. PS – yeah, I see that there is a range of d for which B and F get too close together. Working on that!

      3. Oops, that d is just what we get from the regular pentagon, for which we already have a better bound… At least this also shows that your improved reasoning works if d\ge \frac{\sqrt{5}+1}4.

      4. At least I got one thing right today 🙂 I think there is another, larger d that also works, where the orientation of BF is reversed.

        I am still noodling over further improvements.

      5. Actually I don’t see how you get that the regular pentagon comes in. The values I get for d (after giving up on getting an expression and just evaluating it in Mathematica) that give BF=1 are 1.4413 and 2.5231. However, what I forgot was that we can also have d slightly more than 1/2 and have AB and DF wrap around to the other side of AD; the value I get is 0.566889.

      6. Also there are three values for which BE becomes 1. They seem to be 0.85065 (which may be where you got the regular pentagon thing from), 0.57735 and 0.52573.

      7. A final observation on this topic for now: isn’t your epsilon event overspecified? With your trapezium construction isn’t it enough to use just that at least one of AD and DE is mono? (And in my construction, just AD?)

        I’ve been trying to do the iteration thing and I’m getting stuck on an apparent need to assume that p(d) is the largest of the probabilities, i.e. no smaller than p(BD mono) etc. How does one avoid (or guarantee) that?

      8. Urgh, sorry again, I momentarily forgot why epsilon needs to have all four length-d edges…

      9. Indeed, I’ve made a mistake; your numbers have got to be correct!

        Regarding the iteration – I also got stuck there, that’s why I never wrote up the iteration part… But if we take a d for which p_d\ge 1-\delta-\varepsilon, where 1-\delta is the supremum, then every other distance’s probability is at most \varepsilon larger, and the claim should follow as \varepsilon \to 0. What do you think?

      10. I’m not sure… I just don’t have enough practice at this sort of number theory… I’m much better at the geometry aspects! Regarding that, I’m still hunting for constructions that use Terry’s original concept in more economical ways (and/or that avoid having excluded ranges for d).

        One thing I’m struggling with is whether there is any way to make useful deductions about the probability that multiple segments are mono, other than building them up segment by segment as in the current writeup. For example, if we take my construction and add a point G so that BEGF is a rectangle, we can say that if the five segments of length d and the four segments like BD are all bi (which gives at most 1/2 + 19*delta) then at least two of the others must be mono, but I can’t get a probability for that. Can you? If so, it seems like it ought to give an improvement.

  7. I’ve been trying to formulate the Prop 36 argument in terms that avoid “event” language, so as to be sure I’m covering all the cases, but I again find myself coming to a much stronger conclusion than is derived there. Please have mercy and enlighten me if I’m just not getting the hang of working with probabilities that don’t obey the normal rules.

    Let me work with my new construction from the previous post (though actually I think we can come to the same conclusion with the trapezium). The key insight in Prop 35 is that the probability that AB is mono and AD is bi is at most 2δ, i.e. twice the amount by which p(d) is less than 1/2. So we have p(ABD mono) = p(AB mono and AD mono) \geq 1/2 – 3δ because p(AD mono) = 1/2 – δ and p(AD mono and AB bi) \leq 2δ. That’s a legitimate way of saying it, right?

    If so, then in the same language we obtain p(ABDE mono) = p(AB mono and AD mono and AE mono) \geq 1/2 – 5δ because p(AD mono) = 1/2 – δ and p(AD mono and AB bi) = p(AD mono and DE bi) \leq 2δ. Right?

    But then, surely by the same logic we can obtain p(ABDEF mono) = 0 = p(AB mono and AD mono and AE mono and DF mono) \geq 1/2 – 7δ because p(AD mono) = 1/2 – δ and p(AD mono and AB bi) = p(AD mono and AE bi) = p(AD mono and DF bi) \leq 2δ. Thus giving δ \geq 1/14 and p(d) \leq 3/7, no?

  8. Btw, I’ve just realized that the <1/2 bound for monochromatic point pairs at distance d\ge \frac{2}{\sqrt{15}} implies that for any triangle that has one unit side and whose other two sides are bigger than \frac{2}{\sqrt{15}}, we can find a trichromatic copy of it – with Tamas, our main result last year was proving a special case of this with the combinatorial tricks of rotating the hexagonal lattice.

  9. OK… I have convinced myself that we can actually get to δ = 1/46 (or 1/50 with your trapezium structure that avoids the problematic ranges of d). Given my lack of knowledge of this area I just bludgeoned my way through the algebra from first principles and it seemed to work, as follows.

    If we assign their own δ’s to the p(mono) of {BD,DE,AF}, of BE, of BF and of EF – let’s call these p,q,r and s respectively – we derive the strengthened inequality that 29δ is at least 1/2 + 3p + q + r + s. Then δ >= 1/58 as already discussed, and it must be larger than that if any of p,q,r,s are greater than 0. But since we can draw the same structure with the segments that are currently of length d getting the current length of BD (resp. BE,BF,EF), we can say that p,q,r,s are all in fact at least 1/58. Thus, we can in fact say that 29δ is at least 1/2 + (6*(29+6))/(58*29) – and it must be larger than that if any of p,q,r,s are greater than 1/58. Obviously we are now iterating, and by considering the sums of infinite convergent geometric progressions I conclude that (as you suggested at the outset) we successfully converge to the loss of all those six “variant δ’s”, so via my construction we get δ = 1/46 and p(d) upper-bounded at 11/23.

    A problem is that this all fails if any of the distances ever goes under 1/2 during the iteration process. If d starts large, like 2, then things are fine if we stick with the trapezium construction, but unfortunately with my construction BF is shorter than d, so we can spiral down into prohibited ranges. Maybe it sometimes gets into convergent spirals that avoid those ranges, but I don’t have the strength to work out the details – the better part of valour is probably to retreat to the trapezium construction, replacing all 29’s above with 31, and accept a p(d) upper bound of 12/25.
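
    For the record, the fixed-point arithmetic behind those two limits (taking all six variant δ’s equal to δ in the limit of the iteration): 29δ \ge 1/2 + 6δ gives 23δ \ge 1/2, i.e. δ \ge 1/46 and p(d) \le 1/2 - 1/46 = 11/23, while the trapezium version 31δ \ge 1/2 + 6δ gives 25δ \ge 1/2, i.e. δ \ge 1/50 and p(d) \le 1/2 - 1/50 = 12/25.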

    Please check my reasoning here and let me know if I have messed up. If it’s OK I will update the wiki.

    1. For clarity: the statement “29δ is at least 1/2 + (6*(29+6))/(58*29)” is of course two iterations in – I skipped the intermediate “29δ is at least 1/2 + 6/58”.

    2. Finally I’ve managed to catch up with this and entered all the arguments in the wiki under Prop 36 – please let me know if I’ve messed up anything or just simply change it!

      1. Looks good to me! Thanks for such a good formalisation of what my revised placing of F can and cannot deliver.

        In other news, the issue of Geombinatorics with my paper is now out. It also features Marijn’s report of the 553-vertex graph and Geoffrey and Dan’s report on graphs excluding two distances, as well as a very nice introductory piece by Soifer.

        In more other news, Dan recently told me that he and Geoffrey have extended their two-distance work in what seems to me to be a very important way: for the additional distance 2, they have achieved chromatic number 6. I know no other details as yet but I am eagerly awaiting them!

      2. @domotorp – I’ve just seen your revision of the last part of Lemma 36 – I think it may be fractionally incorrect, because in addition to the three distances of my construction that tend to 2 as d tends to infinity there is also one that tends to 4, and only six that tend to infinity (four that are exactly d and two others). I changed that in the wiki, but I could not convince myself how (if at all) it changes your final calculation, so I leave that to you.

  10. I think I can get a useful upper bound on p(d) for all d (even less than 1/2) by a suspiciously simple construction. (By which I mean that yeah, I’m probably messing up again… please check the following.)

    Let ABC be a unit-edge triangle, and let D be any point such that AD=BD. Let |AD|=|BD|=d and |CD|=e, and pick D such that d \geq e. Note that d \geq 1/2 but e can be arbitrarily small. Let p(d) = 1/2 - x - y and p(e) = 1/2 - x - z, where 1/2 – x is the supremum of p over all distances.

    Now trace through the early part of the logic of Prop 35 (or 36), as follows. AD and CD are individually mono with probability 1/2-x-y and 1/2-x-z respectively, but cannot both be mono. Thus, the probability that AD and CD are both bi is at most 2x+y+z. Similarly for the probability that BD and CD are both bi. Next para: if AD is mono, then CD is not mono, and thus BD is mono outside of an event of probability at most 2x+y+z. Putting that in words that I find more natural, the probability that AD is mono and BD is bi is at most 2x+y+z.

    But wait – in this construction, p(AD and BD both mono) = 0, so that probability is simply the probability that AD is mono, which is 1/2-x-y. So 1/2-x-y \leq 2x+y+z, i.e. 3x+2y+z \geq 1/2, i.e. p(e) + 2p(d) \leq 1. Since for any distance p there exist locations for D that make d=p and also ones that make e=p, I think this means that the average value of p over all distances is at most 1/3.
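
    To record the inequality chain in one place (in the notation above, and only as a sketch): since |AB| = |AC| = 1, the event {AD mono} forces both {BD bi} and {CD bi}, and since |BC| = 1 the events {BD mono} and {CD mono} are disjoint, so 1/2 - x - y = p(AD mono) \leq p(BD bi and CD bi) = 1 - p(BD mono) - p(CD mono) = 2x + y + z, which rearranges to 3x + 2y + z \geq 1/2 and hence 2p(d) + p(e) = 3/2 - (3x + 2y + z) \leq 1.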

    I’m not quite sure where to go from there. Clearly (if the above is not broken) this gives us new upper bounds on p(d) for the values of d that arise when we set e to distances for which we already have positive lower bounds; for example I think p(\sqrt{5-\sqrt{12}}) \leq 3/8 since that’s d when e=2. But can we say anything more than that?

    1. Sorry for re-use of “p”! I should have said that since for any distance q there exist locations for D that make d=q and also ones that make e=q, I think this means that the average value of p over all distances is at most 1/3.

    2. This is certainly correct, in fact, probably the proof can be shortened significantly to conclude p(e)+2p(d)\le 1 – note that this was used already in the table in the special case d=e=1/\sqrt{3}. But it is indeed a useful lemma that combines lower and upper bounds in a non-trivial way.

      1. Thanks Domotor! I have added this (in substantially abbreviated form) to the wiki as Lemma 38; please edit as you see fit.

      2. So the implication is that we would have a human proof that CNP is at least 5 if we could find a human-analysable UD graph and a triple {d,e,f} that can be the distances from a point to the vertices of a unit triangle, such that there is a set of vertex pairs of the graph that includes at least one pair of all three lengths and more than 1/3 of which are mono in any 4-colouring. Do I have that right?

      3. Indeed! Just in your set of vertex pairs, there should be the same number of pairs at distance d, e and f. I suppose that there is a hope that we can also win here some small \delta. That would imply that it’s sufficient to find a 5-chromatic graph where all but 3 distances are unit, and these 3 exceptional distances are d, e and f.

      4. Ah yes – thanks for that correction. I have just added a note to Lemma 38 recording this. I agree about seeking a delta! – but I don’t yet see a way to do that.
