2024 Prizes


This year, the community goal is to read 90% of our scanned scrolls. There are a number of prizes towards this goal:

  • $200,000 - 2024 Grand Prize - Read 90% of Scrolls 1-4

  • $100,000 - First Automated Segmentation Prize - Reproduce the 2023 Grand Prize result but faster

  • 4 x $30,000 - First Letters and First Title Prizes - Find the title of Scroll 1, or first letters in Scrolls 2, 3, or 4

  • $350,000 - Monthly Progress Prizes - Open-ended prizes from $1,000 to $20,000

Details below! This is a lot of information, and we want to help - bring your questions to our Discord community or to our team.

2024 Grand Prize

The $200,000 2024 Grand Prize will go to the first person or team to read 90% of Scrolls 1-4, using the following criteria:

  1. Segmentation

  • Compute the total surface area (in cm2) of papyrus sheets present in all four complete scrolls combined.

    • We may compute and specify this value ourselves as the year progresses, in which case you can skip this step.

  • Compute the same measure for the papyrus sheets actually segmented in your submission. You must segment 90% or more of the total from all scrolls (not per-scroll).

  • Segments should be flattened and shown in 2D as if the scroll were unrolled. Each scroll is ideally captured by a single segmentation (or each connected component of the scroll) rather than separate overlapping segmentations.

  • Segments should pass geometric sanity checks; for example, no self-intersections. We will evaluate stretch metrics to measure distortion.

  2. Ink detection

  • Generate ink predictions on the segments.

  • The entire submission is too large to transcribe quickly, so the papyrological team will evaluate each line as:

    • ✅ readable (could read 85% of the characters),

    • ❌ not readable (couldn't),

    • 🟡 maybe (would have to stop and actually do the transcription to determine), or

    • 🔷 incomplete (line incomplete due to the physical boundaries of the scroll)

  • 90% of the total complete lines (incomplete lines will not be judged) must be either 🟡 maybe or ✅ readable. Multiple papyrologists may review each line, in which case ties will be broken favorably towards the submission.

As a baseline, here's how the 2023 Grand Prize banner would have scored:

Part of the 2023 Grand Prize banner scored using this rubric (full banner).

Total lines: 240. Complete lines: 206. Passing lines: 137. Pass rate: 137 / 206 = 67% (needs to be 90%).
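
To make the two thresholds concrete, here is a minimal Python sketch of the arithmetic, using the banner line counts above and hypothetical (placeholder) area figures:

```python
# Sketch of the two 90% thresholds. Area figures are hypothetical placeholders;
# the line counts are the 2023 Grand Prize banner baseline quoted above.
total_area_cm2 = 10_000.0        # hypothetical total papyrus area of Scrolls 1-4
segmented_area_cm2 = 9_200.0     # hypothetical area covered by your segments
coverage = segmented_area_cm2 / total_area_cm2
print(f"segmentation coverage: {coverage:.0%} (must be >= 90%)")

complete_lines = 206             # incomplete lines are not judged
passing_lines = 137              # lines judged readable or maybe
pass_rate = passing_lines / complete_lines
print(f"ink pass rate: {pass_rate:.0%} (must be >= 90%)")   # 67% for the banner
```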

More and larger segmentations are needed, as well as improvements to ink recovery. Already both fronts are moving forward!

In 2023, we went from 0% to 5% of one scroll. Reading 90% of each of our scrolls will lay the foundation to read all 300 scrolls. The 2024 criteria are designed to be as permissive, flexible, and favorable as possible within the high-level objective of reading the scrolls. Favorable adjustments will be made if required (for example, if it is discovered conclusively that a scroll does not contain writing). Our mission is to read the scrolls. We want to award this prize!

Multiple submissions are permitted, and we can provide feedback for early or partial submissions. If no team meets the above criteria by the deadline, Vesuvius Challenge may award the prize to the team(s) that were closest. These and other awards are at the sole discretion of Vesuvius Challenge.

The deadline is 11:59pm Pacific, December 31, 2024. When you are ready, see the submission instructions.

First Automated Segmentation Prize

The 2023 Grand Prize showed that we can extract significant regions of text from inside a sealed Herculaneum scroll - but to scale up, these methods need to be significantly faster and cheaper. This prize asks you to reproduce a year of work in much less time. It sounds crazy, but we believe this is possible using improved automation!

We're awarding $100,000 to the first team to autosegment the 2023 Grand Prize region in Scroll 1, with quality matching or exceeding the manual segmentation from 2023 (comparable legibility and number of letters recovered). A second place $12,500 prize is also available for the second team to achieve this.

We will judge the segmentation by the following criteria:

  • Inputs. Maximum 4 hours of human input and 48 hours of compute (in any order). Existing 2023 Grand Prize segments (and overlapping ones) represent significant human input, so they can only be used as inputs or training data if memorization is eliminated. Reach out to us for approval if you want to do this. Segments from elsewhere in the scroll can be used!

  • Outputs. Our Segmentation, Technical, and Papyrological Teams will evaluate the segmentation:

    • Geometric checks: Single continuous segmentation. Manifold. No self-intersections (see the sanity-check sketch after this list). Can exceed the 2023 Grand Prize banner, but must cover 95% of it.

    • Segmentation accuracy. Assessed by the Segmentation Team and the Technical Team.

    • Flattening. You don’t necessarily have to implement flattening (it is provided in at least Volume Cartographer), but if you do, it should be comparable to 2023 Grand Prize results.

    • Ink detection. Comparable or better ink detection as determined by our Papyrological Team. You can use the open source 2023 Grand Prize winning solution for this.
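
Before submitting, you can run rough versions of the geometric checks yourself. The sketch below is not an official evaluation script; it assumes a mesh exported to a hypothetical segment.obj and uses the trimesh library:

```python
# Minimal sanity-check sketch, not an official evaluation script.
# Assumes a mesh exported to a hypothetical "segment.obj" and the trimesh library.
import numpy as np
import trimesh

mesh = trimesh.load("segment.obj", process=False)

# Single continuous segmentation: expect exactly one connected component.
components = mesh.split(only_watertight=False)
print("connected components:", len(components))

# Manifold check for an open sheet: every edge should border at most two faces.
edges = np.sort(mesh.edges, axis=1)
_, counts = np.unique(edges, axis=0, return_counts=True)
print("non-manifold edges:", int((counts > 2).sum()))

# Surface area, assuming vertex coordinates are in micrometers; adjust the
# conversion factor if your mesh is in voxel or millimeter units.
print(f"surface area: {mesh.area / 1e8:.1f} cm^2")   # 1 cm^2 = 1e8 um^2
```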

A submission requires:

  • Ink detection image. Submissions must include an image of the virtually unrolled segment, showing visible and legible text.

    • Submit a single static image showing the text region. Images must be generated programmatically, as direct outputs of CT data inputs, and should not contain manual annotations of characters or text. Using annotations as training data is OK if they are not memorized by the model, for example if you use k-fold validation (see the sketch after this list).

    • Include a scale bar on the submission image.

    • You may use open source methods (for example the 2023 Grand Prize winning submission) for ink detection.

  • Texture image. Include a texture image showing the papyrus fiber structure of the segmented region.

    • You may use Volume Cartographer’s vc_render or other existing tools to do this.

    • This image must be aligned with the above ink detection image and have the same dimensions.

  • Segmentation files. Provide the actual segmentation.

    • Expected format: A mesh file along with a UV map defining the flattening.

  • Proof of work. Your result should be reproducible using approximately 4 hours of human input and 48 hours of compute time.

    • Provide evidence of this. For example, a video recording the manual parts of your process.

  • Methodology. A detailed technical description of how your solution works. We need to be able to reproduce your work, so please make this as easy as possible:

    • For fully automated software, consider a Docker image that we can easily run to reproduce your work, and please include system requirements.

    • For software with a human in the loop, please provide written instructions and a video explaining how to use your tool. We’ll work with you to learn how to use it, but we’d like to have a strong starting point.

    • Please include an easily accessible link from which we can download it.
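
On the memorization point above, here is a minimal sketch of k-fold training, assuming hypothetical load_patches and train_ink_model helpers and scikit-learn; every submitted prediction comes from a model that never saw that region's labels:

```python
# Minimal sketch of memorization-safe training via k-fold validation.
# load_patches and train_ink_model are hypothetical stand-ins for your own code.
from sklearn.model_selection import KFold

patches, labels = load_patches()          # hypothetical: CT patches + ink labels
kf = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(kf.split(patches)):
    model = train_ink_model(patches[train_idx], labels[train_idx])   # hypothetical
    preds = model.predict(patches[val_idx])
    # Only these held-out predictions go into the submitted image, so the model
    # cannot simply replay annotated characters it was trained on.
    print(f"fold {fold}: predicted {len(preds)} held-out patches")
```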

Submissions must be made by 11:59pm Pacific, December 31, 2024. Make your submission using this form.

3 First Letters Prizes + First Title Prize

We’re issuing 3 more First Letters prizes, as well as a First Title Prize!

  • First Title in Scroll 1: $30,000 1st place, $7,500 2nd place

  • First Letters in Scroll 2: $30,000 1st place, $7,500 2nd place

  • First Letters in Scroll 3: $30,000 1st place, $7,500 2nd place

  • First Letters in Scroll 4: $30,000 1st place, $7,500 2nd place

For Scrolls 2, 3, and 4, $30,000 will be awarded to the first team that uncovers 25 letters within a single area of 10 cm2, and open sources their methods and results (after winning the prize). $7,500 is also available for the second team to meet this bar.

We are also awarding $30,000 to the first team to discover the title of Scroll 1, and $7,500 to the second team to find it before it is announced. For more information about where to look for the title, see the announcement of the First Title prize.

The purpose of these prizes is to close the gap between our current state of the art and the seriously challenging 2024 Grand Prize. Last year we showed it is possible to recover text from a single rolled scroll. Generalizing these methods across multiple scrolls and scans will make them more robust, which will be needed to read the complete library.

A submission requires:

  • Image. Submissions must be an image of the virtually unrolled segment, showing visible and legible text.

    • Submit a single static image showing the text region. Images must be generated programmatically, as direct outputs of CT data inputs, and should not contain manual annotations of characters or text.

    • Specify which scroll the image comes from. For multiple scrolls, please make multiple submissions.

    • Include a scale bar showing the size of 1 cm on the submission image.

    • Specify the 3D position of the text within the scroll. The easiest way to do this is to provide the segmentation file (or the segmentation ID, if using a public segmentation).

  • Methodology. A detailed technical description of how your solution works. We need to be able to reproduce your work, so please make this as easy as possible:

    • For fully automated software, consider a Docker image that we can easily run to reproduce your work, and please include system requirements.

    • For software with a human in the loop, please provide written instructions and a video explaining how to use your tool. We’ll work with you to learn how to use it, but we’d like to have a strong starting point.

    • Please include an easily accessible link from which we can download it.

  • Hallucination mitigation. If there is any risk of your model hallucinating results, please let us know how you mitigated that risk. Tell us why you are confident that the results you are getting are real.

    • We strongly discourage submissions that use window sizes larger than 0.5x0.5 mm to generate images from machine learning models. This corresponds to 64x64 pixels for 8 µm scans (see the short calculation after this list). If your submission uses larger window sizes, we may reject it and ask you to modify and resubmit.

  • Other information. Feel free to include any other things we should know.
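
The window-size guideline above is a simple unit conversion; here is the quick check, assuming an 8 µm voxel size:

```python
# The 0.5 mm window limit expressed in pixels at an 8 µm voxel size.
voxel_size_um = 8.0
window_mm = 0.5
window_px = window_mm * 1000 / voxel_size_um
print(window_px)   # 62.5 -> a 64 x 64 pixel window in practice
```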

Your submission will be reviewed by the review teams to verify technical validity and papyrological plausibility and legibility. Just as with the Grand Prize, please do not make your discovery public until winning the prize. We will work with you to announce your findings.

Submissions must be made by 11:59pm Pacific, December 31, 2024. Make your submission using this form.

Monthly Progress Prizes

We’re awarding an estimated $350,000 this year in monthly progress prizes for submissions that get us closer to reading 90% of Scrolls 1-4. These prizes are more open-ended, and we have a wishlist to provide some ideas. If you are new to the project, this is a great place to start. Progress prizes will be awarded at a range of levels based on the contribution:

  • Gold Aureus: $20,000 (estimated 4-8 per year) – for major contributions

  • Denarius: $10,000 (estimated 10-15 per year)

  • Sestertius: $2,500 (estimated 25 per year)

  • Papyrus: $1,000 (estimated 50 per year)

We favor submissions that:

  • Are released or open-sourced early. Tools released earlier have a higher chance of being used for reading the scrolls than those released the last day of the month.

  • Actually get used. We’ll look for signals from the community: questions, comments, bug reports, feature requests. Our Segmentation Team will publicly provide comments on tools they use.

Things we’d love to see – these encompass a wide range of award levels. Check back as we’ll update this list!

  • Ink refinement: Other prizes target new ink findings, but we also want to see improvements to existing passages where ink recovery is poor

  • 3D/volumetric ink detection and other creative approaches

  • Improved documentation: updates to tutorials and introductions

    • Pull requests to our website or standalone resources both accepted!

  • Volume Cartographer (EduceLab, spacegaier fork):

    • Multi-axes viewports & segmentation

      • Enable 3D (XYZ) inspection of scroll volume (similar to Khartes)

      • Allow free-3D (not just XYZ) rotation with updating viewports

      • Ability to continue segmentation at any angle of rotation

    • Flattened segment preview

      • Live or near-live rendering from OME-Zarr volume (see the OME-Zarr sketch after this list)

      • Enable live or near-live inspection of fiber continuity (similar to Khartes)

    • Ink detection preview

      • Live or near-live rendering from OME-Zarr

      • Live or near-live updated ink detection viewport, similar to the flattened segment preview

      • Interchangeable ink detection models

    • Import and Display pointclouds in VC

      • For inspection

      • For segmentation

    • Novel segmentation algorithms

    • GUI improvements

    • Bug fixes

      • Eliminate dot residue: dots that were moved occasionally appear to remain in their original location until a view reset

      • Ability to save active changes to disk while Segmentation Tool is active

      • Disable re-saving for display-only segments; ask the user whether to save changed segments upon leaving "compute" mode

  • ThaumatoAnakalyptor:

    • Better solution for sheet stitching

    • Improved point cloud extraction

    • Improved mesh reconstruction

  • Mesh merging: tool to combine multiple overlapping segmentations into a single, continuous mesh (manifold and non-intersecting)

  • Mesh chunker: tool to translate a set of input meshes into a set of annotated cubes of configurable size. Each cube should represent/store each intersecting papyrus segment as its own mesh (continuous, manifold, non-intersecting)

  • Compressed areas: improved segmentation methods for regions of compressed papyrus

  • Live previews: show live segment previews during segmentation and allow refinements to update the preview

  • Scan analysis: analyze multi-energy/multi-resolution scans to identify optimal scan settings

  • Volume registration: automated deformable techniques to align different scans (resolution, energy) of the same object

  • Visualization tools

  • Segmentation inspection tools

  • Performance improvements, so these steps can handle larger segmentations:

    • Segmentation

    • Flattening

    • Rendering

    • Ink detection
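
For the OME-Zarr preview items above, here is a minimal sketch of chunked reads from a multiscale volume, assuming a hypothetical local scroll.zarr store and the zarr library; only the chunks that intersect the requested region are read, which is what makes near-live previews practical:

```python
# Minimal sketch of chunked reads from a multiscale OME-Zarr scroll volume.
# "scroll.zarr" and the voxel coordinates are hypothetical placeholders.
import zarr

root = zarr.open("scroll.zarr", mode="r")
level0 = root["0"]                        # full-resolution scale, shape (z, y, x)
print(level0.shape, level0.dtype)

# Read a small region around a point of interest; only the chunks that overlap
# this slice are fetched, which keeps previews responsive.
z, y, x = 5000, 2500, 2500
patch = level0[z - 32 : z + 32, y - 256 : y + 256, x - 256 : x + 256]
print(patch.mean())
```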

More information in the roadmap.

Submissions are evaluated monthly, and multiple submissions/awards per month are permitted. The next deadline is 11:59pm Pacific, July 31, 2024! Submit using this form.

Terms and Conditions

Prizes are awarded at the sole discretion of Scroll Prize Inc. and are subject to review by our Technical Team, Segmentation Team, and Papyrological Team. We may issue more or fewer awards based on the spirit of the prize and the received submissions. You agree to make your method open source if you win a prize. It does not have to be open source at the time of submission, but you have to make it open source under a permissive license to accept the prize. Submissions for the First Automated Segmentation Prize, each First Letters Prize, the First Title Prize, and the 2024 Grand Prize will close once the winner is announced and their methods are open sourced. Scroll Prize Inc. reserves the right to modify prize terms at any time in order to more accurately reflect the spirit of the prize as designed. Prize winners must provide payment information to Scroll Prize Inc. within 30 days of the prize announcement to receive the prize.
