
June 2010 Paris User's Meeting: Meeting Notes

Notes taken by the OME Developers during the meeting. If you have any additions we would be delighted to receive them.

Quick Summary of Day 1

  • Lunch was good!
  • The power went.

General Summary of Day 1

  • During Jason's introduction to OMERO, there was a good (preliminary) discussion about OMERO.fs and how the remoting of filesystems can & should work. The OME team pointed out how the FS solution will entail some loss of control from OMERO over the contained files. It was also pointed out that there are at least two separate scenarios: lab vs. production facility.
    • The OME team also asked if the community would be interested in providing more detailed feedback on server startup, for example including the recent contents of the session, event and eventlog tables. There seemed to be some interest, as long as the security of the system was guaranteed, a topic that was discussed in some depth, including password policies and certificate authorities (oh my).
    • Finally, the topic of workflows was opened with the request that the various pre-processing steps before binary data reaches OMERO be recorded as metadata-only XML files with some form of links between them. The OME team, in general, would like to work with others with more expertise in workflows.
  • After Jerome Avondo's talk on VolViewer, a 3D viewer which he is integrating with OMERO both as a client and a backend, there was a request for both streaming movie download and JavaScript control of movie creation.
  • Martin Spitaler and Mark Woodbridge presented an overview of their careful OMERO rollout at Imperial, providing a good checklist for other sites that may want to go down the same route. The most important missing feature seemed to be the storage of sample preparation metadata on import.
  • Martin Horn presented Knime, which has experimental support for reading from OMERO, but not yet support for writing back. In later discussions with Josh and Bernd Jagdla, the steps needed to run Knime workflows as scripts were worked through.
  • Pavel Tomancak presented Fiji, and mentioned looking forward to finding more connections between the two projects. The OME team was interested in parallel/headless execution for using Fiji as a background executor. Progress has been made on that front, and an effort is being made to encourage plugin writers to use annotations to prevent the intermingling of GUI and processing code.
    • Two scenarios were brought up by Ela regarding analysis: one may be analysing a single plate or 400 plates. It would be good to be able to save parameters from an analysis run so they can be re-used on further data. An example might be producing output graphs as PDFs for visual comparison. Pavel pointed out that this may be a good point of integration with OMERO and/or Knime, and Bob Murphy mentioned that in many or most cases the interactive approach will not scale.
    • Jason pointed out that a significant difference in Fiji is the curation of plugins, but on the other hand, this can lead to hostilities in the community since many people have traditionally liked ImageJ because there are few/no rules. Nevertheless, there was strong support for the idea of a peer-review system in order to work toward reproducibility. Carlos asked if there wouldn't be a way to have multiple repositories like /etc/apt/sources.list; Johannes responded not yet.
  • Aaron briefly presented libBlitzBioFormats, a C++ library using Jace, which makes calls to BioFormats. See
  • After lunch, Stefan Schek presented Leica's work on a white-box system, intended to provide links to the outside world. To do this, Leica has produced a C++ library which writes OME-TIFFs on the fly. The export process can be configured on a per-experiment basis via XML.
  • Karsten Kottig presented Perkin Elmer's experience with building a commercial solution, Columbus, on top of OMERO. There is a certain difficulty perceived with the various look-and-feels between the clients. It was asked if there had been any thought of integration with micromanager. Karsten responded, "No. Should there be?" On hearing "yes", he said to ask again next year. In addition to there being several things that need to be cleaned up in the schema, such as the addition of camera orientation, Karsten suggested that there needs to be a client or use-case for each field in the model. Orientation, for example, is needed for stitching, but it is less clear what NDFilter might be needed for.
  • Among other things, Bob Murphy presented PSLID's efforts to use OMERO as the backend for the PSLID server, and said that getting plugged in was amazingly fast. On being asked by Jason where the analysis/classification work is going (3-5 years), Bob said the complexity of the questions we can ask is steadily increasing, from "is it possible to distinguish x from y?" 10 years ago to much higher-order issues. The priority is trying to understand the space of possible patterns that one can discern automatically.
  • To close the first day, the group compiled a list of issues needing further discussion (starred items were slated for discussion on day 2):
    • FS/Data Access/Duplication *
    • OME Data model/OME-TIFF *
      • NDIM/stitched/sparse
      • Updates, Support, Maintenance
      • New modes, stitching, ROI/masks
      • Experimental data model
      • Validation/levels
      • Spectral/SPIM: definitive definition
    • ‘Publish’: exposing data in OMERO
      • .js, Access for reviewers, etc.
    • Big images
    • Visualization (VolViewer)
    • LSIDs (unique identifiers)
    • Real scaling (100s of users, etc.)
    • Analysis *
      • Results storage (HDF5,per cell, nosql), Functional description
    • Hierarchy/search/organ./onto.
    • Workflows(comp)/labbook(exp)
    • Archiving
    • Webstart/SSO/sysadmin/teaser
    • Using OMERO/Simple API/REST
    • Clients: ImageJ retrieval by content, process-all-tags, agent reuse, pull v. pull/push
    • Named featured sets
    • (New) Developer resources
    • Plugins & auto-upgrades

Day 2 : Discussions

  • NDIM

    • Issues: specific vs. generic dimensions, sparseness, stitching
    • Decisions:
      • Spatial dimensions and features will be split into two separate arrays. Spatial dimensions are of the form: "x", "y", "t", "z", "angle", etc. Features consist of a single array of pixel types. Metadata about the actual layout on disk may need to be added to optimize access from bio-formats.
      • The addition of a specific "orientation" dimension didn't seem to have much support; instead the OME team will look into preparing N-spatial dimensions (perhaps backed by an ontology). Orientation should still be added to the model elsewhere.
      • In order to support stitching, deconvolution, and other transformations, a container (still unnamed) will be added which maps inputs to a single output, and includes a description of the transformation process. A "reference image" should perhaps be considered as a part of the object. Further, the links between images should have: description, namespaces, annotations; this could also solve the issue of filesets (e.g. lei); and might also provide support for describing the tiling of big images.
      • Rather than supporting full sparseness, the transformation object will be used to tie together objects with measurements of different dimensionality.
    • The decisions made seemed to support the use cases provided by people attending, including EM, superresolution, FLIM, and SPIM.
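The split between spatial dimensions and a separate feature array, plus the still-unnamed transformation container, can be sketched roughly as follows. All class and field names here are hypothetical illustrations, not part of the OME model:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class NDImage:
    """Hypothetical n-dimensional image descriptor: spatial dimensions
    ("x", "y", "z", "t", "angle", ...) are kept separate from the
    feature array (one pixel type per feature)."""
    name: str
    spatial: Dict[str, int]   # dimension name -> extent
    features: List[str]       # pixel type per feature, e.g. ["uint16"]

@dataclass
class Transformation:
    """Hypothetical container mapping input images to a single output,
    with a description of the transformation (stitching, deconvolution,
    ...) and an optional "reference image"."""
    inputs: List[NDImage]
    output: NDImage
    description: str
    reference: Optional[NDImage] = None

# Example: stitching two tiles into one larger image.
tile = NDImage("tile", {"x": 1024, "y": 1024, "z": 1}, ["uint16"])
stitched = NDImage("stitched", {"x": 2048, "y": 1024, "z": 1}, ["uint16"])
t = Transformation(inputs=[tile, tile], output=stitched,
                   description="stitch 2x1 grid")
```

Objects of different dimensionality (e.g. an image and a per-cell measurement) would then be tied together by such a transformation rather than by a fully sparse model.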
  • Analysis and the storage of results

    • Issues: should efforts be made to provide a (partially or initially) standardized API/model for representing the results from analysis?
    • Highlights:
      • Interaction with some workflow tool seemed key, and Knime was often mentioned as a candidate.
      • Anne Danckaert (Pasteur) mentioned that for IP the storage of the (textual) description of the results (not the results themselves) is key.
      • At the same time, Bernd (also of Pasteur) brought up that the existing software (knime, omero, fiji) seems to provide much toward a working solution.
      • Ela mentioned that openBIS has needed in some cases to index by gene names and other common searches, as well as to introduce user preferences ("actions") for only displaying certain options to users.
      • The request from Bernd to allow composition of workflows (workflows running other workflows), such that a change to a workflow can trigger a re-run of many inputs, was tabled.
      • A possible Column constructor that the group came up with is: "Column(source, version, type, name, url/description)"
      • After the meeting, Anne Danckaert brought up: query by results, finding all tables with results of a given type, finding all columns with a given name, and manually entering results.
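As a rough illustration, the constructor discussed above might look like this in Python. The field names and the optional default are assumptions for illustration, not the OMERO.tables API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Column:
    """Sketch of the Column constructor the group came up with:
    Column(source, version, type, name, url/description)."""
    source: str                        # tool that produced the values
    version: str                       # version of that tool
    type: str                          # value type, e.g. "double"
    name: str                          # column name, e.g. "cell_area"
    description: Optional[str] = None  # url or free-text description

col = Column("fiji", "1.44", "double", "cell_area", "area in um^2")
```

Carrying source and version on every column would make the later requests (query by results, finding all columns with a given name) easier to satisfy.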

Provided wish lists (most items will become tickets)

  • Admin's Perspective

    • OMERO.webadmin: red warning „login failed“ should disappear if user re-tries
    • manual editing of configuration files should be sufficient (some are modified by „setup routines“ -> manual modifications get lost)
    • we need an export function (i.e. take defined images completely out of OMERO and store them on tape to free disk space on our RAID)
    • we need a real delete function (disk space!)
    • load balancing of OMERO servers
    • a script which imports a directory, adding the directory name as a tag
    • per-user / site-wide policy settings to prevent bogus installs, prevent non-ssl connections
    • test macports on mac server
    • client webstart
    • per-user disk quotas
  • User's Perspective

    • Jean-Marie: Can I rename images, datasets, projects in OMERO.insight?
    • Chris: Can I connect ImageJ to OMERO.server?
    • Jean-Marie: Can I drag the slider, while a movie plays?
    • Jean-Marie: Why does my movie always start at Image #1? I'd like to place the slider somewhere and start playing the movie from this position ...
    • Jean-Marie: OMERO.insight: sometimes, the folder structure was not refreshed, I could not see my subdirectories ... After going top-down manually, it worked ...
    • Melissa: Leica LIF files do not work
    • Melissa: TILL LA TIFF files do not work
    • Chris / Josh: Why do I have two separate programs (importer / insight)? I want one tool for everything instead of many clients ... (i.e. OMERO.importer to import images, OMERO.insight to attach protocol files and statistics ... too complicated)
    • project + dataset is necessary ... why not just one „folder“? (biologists see folders and think in terms of „folders“ -> project == root directory; dataset == sub-folder)
    • I want to share my data with my colleagues using OMERO.insight ... Not possible with Beta 4.1.1 ...
    • SSL tick box needs more explanation: login always uses SSL; the tick box only controls whether the communication afterwards is encrypted.
    • read-only plates to prevent adding bogus data
    • click on plate/dataset and get a table of general statistics
    • delete web shares
    • online (web) editing of metadata, please
    • plate heatmaps, dot plots
  • Developer's Perspective

    • Images are NOT imported into project/dataset and cannot be found under "Hierarchies" using OMERO.insight
      • instead, images can be found under "Images" using OMERO.insight
      • it is unclear where the library.setDataset() functionality is or how it has been replaced ...
    • Which packages do I have to include (minimum) to make my client run? Currently I have the entire OMERO trunk in my class path ...
    • Ability to execute a script in parallel
    • Clients should not assume a local disk in order to be webstart compatible.
    • import API
    • support for multiresolution
  • Misc

    • Annotating Instrument
    • Central wiki page (or similar) where all "OMERO extensions" (third party and other OMERO applications) are listed

Will's Notes

Pavel Tomancak

  • They use TrakEM2 (Fiji) or "CATMAID" to browse 'Big Images' stitched from multiple 2k x 2k TEM images (serial sections).
  • CATMAID is google-maps with Z stacks, using JPEG image pyramids, Javascript and Postgres:
  • TrakEM2 can be used to generate the image pyramid.
  • Use these tools to annotate neurons through Z stacks.
  • TrakEM2 is used to stitch the images, but does not save as a single big image. Simply saves the transformation matrices.
  • Also mentioned ITK (a C++ image processing library), but it is hard to use. Used by the BioImageXD app (3D image viewing and analysis in Python).
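The google-maps-style tiling that CATMAID uses can be sketched as follows; the 256-pixel tile size and simple halving scheme are assumptions for illustration, not CATMAID internals:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Sketch of a JPEG image pyramid: at each zoom-out level the image
    is halved until it fits in a single tile. Returns a list of
    (level, width, height, tiles_x, tiles_y) tuples."""
    levels = []
    level = 0
    w, h = width, height
    while True:
        tx, ty = math.ceil(w / tile), math.ceil(h / tile)
        levels.append((level, w, h, tx, ty))
        if tx == 1 and ty == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
        level += 1
    return levels

# A single 2k x 2k TEM section: 8x8 tiles at full resolution,
# one tile at level 3.
for lvl in pyramid_levels(2048, 2048):
    print(lvl)
```

The browser then only fetches the tiles visible in the viewport at the current zoom level, which is what makes panning large stitched sections cheap.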

Grant Calder (LM facility manager, John Innes Centre, Norwich - with Jerome)

  • Uses Fiji (not TrakEM2 plugin) to stitch images together and save as a big image (mostly works with confocal scopes).
  • But images are not tiled (they follow a structure), so the gaps are filled with black and the final image is bigger than the raw data.
  • Typically stitches 1k x 1k images and gets 5k x 5k images.
  • Been 'watching' OMERO since it was just OME.
  • General impression (at facility managers meeting) is that OMERO is (too) hard to install.

Ralf Palmisaro? (Microscope facility manager? Medical research)

  • Asked about security: E.g. can I limit logins to within our institution?
  • Seemed pretty happy about the 4.2 permissions options.
  • He is working with a group at a Fraunhofer institution (not sure which one) to develop NEW segmentation algorithms, since they could not find any existing ones that worked well with their data (bacterial infection counting/intensity).
  • Was interested in C++ API. Will ask his student to look at our docs.
  • Gave him demo - OK.

Martin Spitaler

  • Would like tagging on import. Including auto-tagging based on path E.g. import /siRNAi/timelapse/image.lsm gets tagged with "siRNAi" and "timelapse".
  • Wants to configure "Archive Original File" option to always be true, and remove the option from the UI, so he KNOWS that all users have a copy of the file in OMERO. Then he can delete client-side copies. Currently, even with his own data, he keeps client-side copies of files in case he forgot to tick the option. Would also like to see which files in the import queue are set to Archive Original File (extra column in the table).
  • Configure image browsing. E.g. view same thumbnails in two panels, each with different rendering settings.
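The path-based auto-tagging Martin asked for could work roughly like this. This is a sketch only; the helper name and behaviour are assumptions, not the importer API:

```python
from pathlib import PurePosixPath

def tags_from_path(path, root="/"):
    """Sketch of auto-tagging on import: every directory component
    under the import root becomes a tag, so /siRNAi/timelapse/image.lsm
    yields the tags "siRNAi" and "timelapse"."""
    rel = PurePosixPath(path).relative_to(root)
    return list(rel.parts[:-1])   # directories only, not the filename

print(tags_from_path("/siRNAi/timelapse/image.lsm"))  # ['siRNAi', 'timelapse']
```

An importer applying such a rule would need the import root to be configurable per site, so that uninformative top-level directories are not turned into tags.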

Jeremy Maris (Sysadmin, Sussex Uni - with Bernie)

  • Discussed archiving, original files, data dupe
  • He wasn't aware of the "Archive Original File" option on import.
  • For him this solves problem of keeping the original file outside of OMERO.
  • Not too bothered about doubling the data in OMERO.
  • Would like to be able to move pixel data and original files to archive storage, while leaving metadata and thumbnails in OMERO
  • Then, if user tries to open an archived image, they get some message "See your sysadmin...". This is fine with him.
  • He and Bernie interested in using OMERO for an imaging course.

Samual (at Pasteur)

  • Doing analysis with Matlab
  • Bacterial lineage under various conditions.
  • Needs to run on cluster (big analysis) but can't afford the Matlab licenses!
  • Considering switching to octave


  • Finding memory limit problems with using OpenAstexViewer as a web applet
  • Even if the data transfer is reduced, OA viewer converts all data to floats
  • Not all browsers allow you to increase memory allocation
  • Currently have problems with data cubes above 200x200x200
  • Could try to use binary maps (no contour adjustment on client) and get OA viewer to use bits instead of floats.
  • OR, even have OMERO calculate the 'marching cubes' and send these??
  • Going to try and populate OMERO with all EMDB entries and hook up OA viewer to view maps in web-client.
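The bits-instead-of-floats idea above amounts to packing the binary map 8 voxels per byte, a 32x reduction versus float32 (a 200x200x200 cube drops from roughly 32 MB to 1 MB). A minimal sketch, for illustration only and not OpenAstexViewer code:

```python
def pack_bits(mask):
    """Pack a flat iterable of booleans into bytes, least significant
    bit first: 8 voxels per byte."""
    mask = list(mask)
    out = bytearray((len(mask) + 7) // 8)
    for i, v in enumerate(mask):
        if v:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

# 8,000,000 voxels -> 1,000,000 bytes; as float32 they would need
# 32,000,000 bytes.
packed = pack_bits([True, False, True, True, True, False, True, True])
```

The trade-off is the one noted above: with a pre-thresholded binary map the client can no longer adjust the contour level.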


  • OMEGA project to use OMERO (images), KNIME (processing) and openBIS (results storage).

Pavel Tomancak (again)

  • Demo of OMERO with respect to their SPIM data
  • They have raw data as XYZ lsm files, one per time-point and angle.
    • spim1/spim-tp1-angle0.lsm
    • spim1/spim-tp1-angle45.lsm
    • spim1/spim-tp2-angle0.lsm etc
  • Having run the reconstruction algorithm, they generate volumes over time as TIFFs in folders of the form /tIndex/zIndex/, e.g. spim1/01/001.tif
  • Want to be able to import /spim1/stage2/image.lsm to Project: spim1 Dataset: stage2 automatically (thousands of images).
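The requested mapping from folder layout to Project/Dataset could be derived from the path like this (a sketch; the helper is hypothetical and not part of the import API):

```python
from pathlib import PurePosixPath

def project_dataset(path):
    """Map /project/dataset/image.ext to its Project and Dataset names,
    e.g. /spim1/stage2/image.lsm -> ("spim1", "stage2")."""
    parts = PurePosixPath(path).parts   # ('/', 'spim1', 'stage2', 'image.lsm')
    return parts[1], parts[2]

print(project_dataset("/spim1/stage2/image.lsm"))  # ('spim1', 'stage2')
```

With thousands of images per acquisition, such a rule would only need the containers to be created once and then re-used for every file sharing the same prefix.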

Paul Barber

  • One feature we might need if moving his stuff (or similar) to scripting service is to 'remember' the last values of each parameter, so you can run a particular script multiple times and adjust parameter values one at a time. Screenshot here:
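The "remember the last values" behaviour could be sketched as a small parameter store that merges the previous run's values with any newly supplied ones. The storage location and all names are hypothetical, not scripting-service API:

```python
import json
from pathlib import Path

class ParamMemory:
    """Sketch: defaults for the next run of a script come from its
    previous run, so one parameter can be adjusted at a time."""
    def __init__(self, store):
        self.store = Path(store)
        self.last = (json.loads(self.store.read_text())
                     if self.store.exists() else {})

    def run(self, script, **params):
        # Newly supplied values win over remembered ones.
        merged = {**self.last.get(script, {}), **params}
        self.last[script] = merged
        self.store.write_text(json.dumps(self.last))
        return merged   # would be passed to the actual script
```

For example, after one run with threshold=0.5 and radius=3, a second run supplying only threshold=0.6 would still see radius=3.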