
May 2009 OME European User's Meeting

Live notes from the meeting


Day 1


 - Spencer - 9:30 to 9:58
  -- "Bio is an ill-posed ontology problem..." (Spencer's life)
  -- bedside <--> bench
  -- "the link between imaging & omics is emergent: functional molecular dynamics"
  -- Imaging technologies span the biological fields from macro to micro
  -- 1900s used to teach doctors to distinguish syphilis
  -- imaging is a horizontal walk / omics are a vertical
  -- everything is about trying to automate (reproducibility, statistical significance, ...)
  -- opera, etc. open for use.
  -- another level of mgmt: linking the information, ontology, vocabularies
  -- transparency & value from the data
  -- also note the management of technical resources (Pasteur Platform Management System)

SUMMARY: World’s 1st imaging platform developed at Pasteur. Trying to walk across all the 
imaging contexts, versus the genomics ‘talk’ in one narrow domain. Original data important, 
knowledge base evolves and allows for new interpretations. Last 5 years about trying to 
simplify image data management.

 - Jason 10:00 to 10:34
  -- afternoon comes down to defining what we want to discuss tomorrow
  -- the entire stack: length of time to get something from data model change to the clients
  -- one word: interoperability
  -- bioformats: 1.5 developers, 70 formats
  -- reducing the hybrid server (Beta2.3) down to a single, supported, Ice-based platform (Beta4)
  -- expansion of metadata completion (5 formats fully), more work coming on this
  -- collaboration with OMERO.web
  -- distributed repositories of image data using OMERO.fs
  -- ImageJ integration now working and being improved in the future
  -- SPW support coming with 4.1
  -- Looking for feedback on: Functionality, Priorities, Release Cycle, Application Domains, 
    File Formats, and Strategic Directions

  -- Questions:
   ---  Q: Can you suggest a file format for how I should store my data? One answer: OME-TIFF
    ---- Support in ImageJ, OMERO, and various other software
    ---- Is it two files then? Can be, but not necessarily.
    ---- Still not perfect; there are some issues with validation
    ---- Note OME-HDF
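The "two files" question above comes from OME-TIFF's design: the OME-XML metadata block can travel inside the TIFF's ImageDescription tag (one file) or as a companion XML file (two files). Below is a minimal sketch of reading such a block with Python's standard library; the schema namespace and all attribute values are illustrative, not a validated OME document:

```python
# Illustrative sketch: parse a (made-up) OME-XML fragment like the one
# OME-TIFF embeds in the TIFF ImageDescription tag. Element and attribute
# names follow the OME schema; the values here are invented.
import xml.etree.ElementTree as ET

ome_xml = """<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2009-09">
  <Image ID="Image:0" Name="example">
    <Pixels ID="Pixels:0" DimensionOrder="XYZCT" Type="uint16"
            SizeX="512" SizeY="512" SizeZ="10" SizeC="2" SizeT="1"/>
  </Image>
</OME>"""

ns = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2009-09"}
root = ET.fromstring(ome_xml)
pixels = root.find("ome:Image/ome:Pixels", ns)
# Recover the 5D shape from the Size* attributes
shape = {d: int(pixels.get("Size" + d)) for d in "XYZCT"}
print(shape)  # {'X': 512, 'Y': 512, 'Z': 10, 'C': 2, 'T': 1}
```

Because the metadata is plain XML, any reader that can locate the ImageDescription tag can recover the dimensions without understanding the rest of the format.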

 - Cat: 10:38 to 10:41
  -- User-centered design : here to record anyone involved in the project
  -- Integrating interactive design, user expectations, managing large teams
  -- Want to get users more involved
  -- Working with others in scientific software development to specify methods

SUMMARY: Overview of the OME history bringing us to our current OMERO release (4.0). 
A look at where we've been, where we're going, and how the community can help us move forward.

- Christoph Best (EBI):  10:42 to 11:02
 -- Electron Microscopy Data Bank
 -- Their work shows the extreme end of microscopy resolution down to the protein level
 -- Computational iterative reconstruction of 3D protein structure from many projections
 -- Electron Tomography: aimed at 3D reconstruction of nanometer structures
  --- Series of images with a "TILT" dimension (rotating the specimen)
  --- Visual (?) proteomics consisting of statistics of many acquisitions  
-- The electron image data bank available online (eBay-like interface for electron images)
  --- working on more online visualization of images using OMERO tools
-- Looking for richer dataset models (cf. RDF as possible model)
-- Want to find a way to use OMERO for quality assessment of their statistically created 
    3d models
-- Whether or not to store the original data used for the 3D reconstructions
-- Need access to computational resources (grid/cloud)
 --- data upload / distribution : remote filesystem? (browser uploads ~2G limit)
 --- Everyone is building own cluster. How should one access a central cluster?
-- Use of OMERO.web
 --- Virtual communities of scientists
 --- P
-- Established field with similar protocols etc. (Central database, re-use)
 --- OME as a standard format, in-house processing
-- use:
 --- 100 uploads per 
 --- getting about half of images produced
 --- 400 sessions per month (unique IP per day)

SUMMARY: A good overview of reconstructive image microscopy from EM and Tomography 
technologies. The big questions are how to use OMERO to help with the sharing, 
display, construction, and validation of this data.

- Emma Hill (JCB) : 11:03 to 11:24 (coffee until 11:45)
 -- Using the JCB data-viewer, a Glencoe tool based on OMERO.web, JCB is able to 
  ensure the data integrity of the manuscript figures & images they receive.
 -- Gave several examples of data manipulated by the authors
 -- Which file formats should JCB be supporting?
 -- Went over the benefits of the JCB Data-viewer: Showcase the data, archive, sharing, 
  integrity, funding requirements.
 -- Downloaded PDFs have a large comment "original data is available" with URL
 -- Requirements from funding agencies are varied, but there are phrases like 
  "...should be made as widely available as ..."
 -- Of interest:
  --- Should DV be a/the standard for publication of image data?
  --- Should the data be more available? (Not just JPGs)
 -- Future functionality:
  --- Greater user access (clients?)
  --- Direct upload from microscopes
   --- Adoption by other journals
 -- Questions:
  --- Congratulations
  --- Where is it stored? (Hosted servers) Multi-gig movies? Yes.
  --- Can users access the original data? No.
   ---- Would like to encourage authors to share the original data
   ---- Have tried to not put initial authors off.
    ---- Would do so at the advice of the community
  --- Each dataset is 2 Terabytes.
   ---- Bigger problem is the data transmission
   ---- First have to get published
  --- Christoph: Amazing how much money has been put into data grids and 
  no one is using them
SUMMARY: Overview of the JCB data-viewer, its benefits, and the reasons 
 for storing original image data alongside publications. Looking for 
 input on future functionality and direction.


 - Karsten (PE) : 11:48 to 12:06
  -- Columbus system based on OMERO
  -- Not just a "pure" screening system anymore : how to get screening, assay, analysis together
  -- Next step is connecting the secondary analysis with compound information
  -- One central component between HCS, volocity/ultraview (research), 
operetta (assay development)
  -- Interfaces to established products
   --- Pipeline pilot (accelrys)
   --- idbs
   --- dax archive server
 -- better overview in insight. interface to acapella, heatmaps
 -- Looking into next Columbus, using SPW; API stability for supporting accelrys, idbs
  --- Adding a native interface for Beta4 interactions (high-speed)
  --- @Need better integration of analysis results. Storage of overlays, results. 
As well as archiving strategy
  --- Web access to structure and data
  --- SPW in insight
  --- @Support for FLEX files, including analysis results
  --- Also: stability is important, standardization of analysis
  --- And for insight: 
    ---- Search & Filter (# of images or Plate)
   ---- clear screen acquisition selection (model is too flexible)
   ---- rendering settings for whole plate (not per well). It can be per well as long as
 we offer way to modify settings globally
   ---- annotation download folder, all documents as annotations (e.g. .docx). Content 
of files does not need to be indexed
    ----- structured annotations are an important development. room to place information 
that's not interesting for whole community.
    ---- remove ugly crosses. Should the image be deleted at import time if not viewable?


 - Karol Kozak (ETH) : 12:09 to 12:44
  -- open source alternatives for high content screening. final goal is to link software 
  -- instrument mgmt -> acquisition -> processing -> normalization/QC 
        -> storage -> archiving -> data mining -> bioinformatics
  -- data automation: library handling, robotics, ... (data flow)
  -- organizing all the different file folder structures (from different systems) for 
the users
  -- integration then of archiving multiplies the problem
  -- focus on data management tool for being able to cut link to original data
  -- 2 x 25 TB (enough for 6 weeks)
  -- most challenging component has been the buffer server. developed script for moving 
to server. has to be scope/screen aware @
  -- need software for: library handling, QC, data mining, visualization, export, 
flexibility (using HCDC)
   --- Eclipse / R / Weka / KNIME
   --- (un)supervised learning
   --- checking various classifiers to see which is most robust
  -- other software
   --- mapass2 workflow (now called Maq)
   --- cytoscape
  -- image processing is the next step of node development (cf. accelrys); looking to 
do this in an open source way
  -- linking to OMERO database as a reader/writer. Need clarifications
  -- webinar on June 04, 2009. Workshop Oct. 16-17, 2009, Zurich
  -- Question:
   --- What specifically does a connection to OMERO look like?
    ---- Unsure at the moment. Difficult question.
    ---- Sharing the results between HC/DC and acapella? It's possible. Setting up concepts
   --- Using JPEG as primary image? (Karsten)
    ----  Not for image processing. That is done on raw data. Only visualization.
    ---- Moved to archiving after hit list. Can be moved back. Has never been asked for.
   --- How are you handling the number crunching? (parallelizing/scheduling/scaling)
    ---- communication between nodes is a matrix (ram, zipped file, or db)



 - Matthias Dunda (c.a.r.u.s.) : 12:45 to 13:00
  -- generally provide interfaces for anything a customer might need
  -- needed an interface (WSDL) inside the Opera to replace an existing
 commercial storage unit
  -- beta4 is of interest because:
   --- speed.
   --- using local network services (samba, nfs) rather than web services
   --- parallel import of images (queued multithread import)
  -- working on European Screening Port
   --- for academic or charity-based organizations who cannot afford screening systems
  -- visor: "bedrock of drug discovery" for combining various screens and high-throughput
   --- links: omero, trixx, spotfire, genedata screener, proprietary database
  -- needs:
   --- stability
   --- full SPW support in GUI
   --- screening metadata support : structured annotation is already a big help 
(point for discussion)
   --- method for archiving : (re-import?)
  -- Questions:
   --- Which metadata? (things attached or in file) Ans: coming from data acquisition 
and specific links to other systems (LIMS) currently using StructuredAnnotations
   --- "Full support for SPW"? Passing buck to the open-source community to bundle up 
for commercial context?
    ---- Ans: Mainly concentrating on server-side issues.


 - Oliver Mueller (Saarland) : 14:38 to 14:49
  -- FLIM, TIRF, FRET, ... cardiac pathologies
  -- previously -> USB/FTP -> workstation
  -- started using OMERO 2007, Beta 2.2
  -- issues with the use of OMERO:
   --- "Oh I forgot to import my files"
   --- "hurry up, it's my turn"
   --- "remove your files from the system"
  -- solution: automated import of image data
  -- "atom" (small tool written in java)
  -- monitoring time interval to reduce CPU use
  -- wish list
   --- atom --> omero.fs
   --- "update" functionality for metadata: checkout->edit->submit (possibly drag-n-drop)
  -- Question:
   --- How do you detect if an import is unfinished? (optional files) Ans: only support 
what import supports.
   --- Available for the community? Yes. From the website.
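The "atom" approach described above (scan a drop folder at a fixed interval rather than monitoring continuously, to keep CPU use down) can be sketched in a few lines. This is a hypothetical Python illustration, not the actual Java tool; the "file has stopped changing" heuristic, function names, and interval are all assumptions:

```python
# Sketch of interval-based polling for automated import, assuming a file
# whose mtime hasn't changed for min_age seconds is finished acquiring.
import os
import time

def stable_files(folder, seen, min_age=5.0):
    """Yield unseen regular files whose mtime is at least min_age seconds old."""
    now = time.time()
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if path in seen or not os.path.isfile(path):
            continue
        if now - os.path.getmtime(path) >= min_age:
            seen.add(path)
            yield path

def watch(folder, handle, interval=30.0, cycles=None):
    """Poll folder every interval seconds; call handle(path) once per stable file.

    cycles=None polls forever; an integer bounds the number of passes (for testing).
    """
    seen = set()
    n = 0
    while cycles is None or n < cycles:
        for path in stable_files(folder, seen):
            handle(path)  # e.g. hand off to the importer
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval)
```

Polling trades latency for simplicity; the OMERO.fs work mentioned in the wish list replaces this with filesystem notifications.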


 - Imtiaz Kahn (Cardiff) : 14:49 to 15:13
  -- Starting a new project using OMERO in June
  -- Working on project called MICLAD: to describe cellular "meso" behavior, as opposed 
to the molecular behavior often captured in OMERO 
  -- An attempt to standardize dynamic cell behavior data, but need checklist from the community
  -- Standards on Metadata: OME + MIACA + MIFlowCyt
   --- Represent the experimental design, material, and methods
   --- MICLAD for defining the dynamic cellular behavior
   --- Encodes spatio-temporal information from images into lineage format (alphanumeric)
   --- cf. ProgeniDB
   --- language for communicating with mathematicians/modellers
  -- xytzg data: where g is a gene index in [0, 25000]; x, y, z, t in [0, 100]
  -- integrated view of the biological system, including dosages, etc. over time
  -- "Biomedical informatics without borders: from collaboration to implementation"
  -- Questions:
   --- Q: Are all images the full g:[0-25000] range (in terms of visualization)
   --- A: Inconclusive. There are attempts to visualize it, but there are static v. 
dynamic data.
   --- Q: we (ome) would love pointers to ontologies in any of these fields (cancer, etc.) 
which seem to be consuming.
    ---- Is there anything you see we should be working with or on incorporating?
    ---- EBI: minimal information index of all the formats as they mature


 - Jerome Avondo (John Innes) : 15:13 to 15:25
  -- 14 biologists, 6 computer scientists working on quantifying/modelling the relationship 
between genes and growth/shape in plants
  -- confocal and optical projection tomography
  -- result: "major axis of growth over time", "nr. of trichomes v. time"
  -- 600 MB per scan, 512^3 x 4 at 16 bit tiffs (OPT)
  -- 50-60GB, timelapse up to 5 days.
  -- ~ 2.7 TB in 2 yrs. 3.5 G / day
  -- two systems (one is very secure), each with OMERO installed.
   --- first: raw data, rate & tag, and metadata.
   --- transport rate/tags to second system for computer scientists to process 
   --- ~800 core cluster
   --- distributing the storage based on some relationship (based on rating?)
  -- Question:
   --- How many people? Just the lab
   --- Shadow casting with fluorescence? Yes. (emitted or absorbed light)
   --- User acceptance? Not sure. Any change is good, and so far is very good.
   --- How do you store the volumes? Just store as another image.
   --- OPT scans are home-grown data acquisition? Confocal is good for small scale. Needed 
larger scale. Developed in Edinburgh. (Niche file format, but growing)
   --- How much is in-house development? Almost all. But released to public as produced.


 - Gareth Howell (Leeds) :  15:30 to 15:51
  -- New to OME/RO, but would like to see how it is used.
  -- DV/LSM510 shop
  -- Studying the formation of blood vessel branches brought on by growth factors often associated with diseases.
  -- need to analyze GFP dots, tubule length, branching points, ...
  -- currently use imagej and imaris, would like to use OMERO
  -- imaging facility develops image analysis protocols and macros
  -- no servers or mining systems
  -- OME to enable group leaders to keep track of data in a transient environment
  -- Questions:
   --- Automatic or manual tracking of regions of interest? Currently manual process (or use manager in ImageJ)
   --- What do you do with ROIs? You'd need enough analysis for it to be statistically significant. Attach anything. considered as sub-image. 
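The ROI analysis discussed above (GFP dots, tubule length, branch points) boils down to computing per-region statistics such as mean intensity, repeated enough times to be statistically significant. A toy sketch of the core computation on plain Python lists; real pipelines would pull pixel planes from OMERO, ImageJ, or Imaris rather than hard-code them:

```python
# Toy sketch: mean pixel intensity inside a rectangular ROI, using a
# nested list as a stand-in for a real image plane.
def roi_mean(plane, x, y, w, h):
    """Mean pixel value of the w-by-h rectangle whose top-left corner is (x, y)."""
    values = [plane[row][col]
              for row in range(y, y + h)
              for col in range(x, x + w)]
    return sum(values) / len(values)

# A 4x4 plane with a bright 2x2 "GFP dot" in the middle
plane = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
print(roi_mean(plane, 1, 1, 2, 2))  # 9.0
```

Storing such per-ROI numbers alongside the ROI itself, rather than in a detached spreadsheet, is exactly the "attach anything / sub-image" point raised in the answer above.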


Day 2

 - Analysis
  -- Viewing results : heat maps
  -- Storing results
  -- Running routines: they need to define the UI themselves
  -- Bernd: where do you stop analysis. open up raw data first (partek)
  -- Karsten: just having the final results is not enough. need to describe cell. (independent of screening)
  -- Martin:  Showing the current state of things
  -- Catrina : biologist. interested in the end result. want to use platform to not just handle the images, routine analysis, launched from insight, simple menu
   --- essential: results (frap curve, processed image, ...) kept track of within omero
   --- two things: high throughput idea (2 measurements for multiple samples), variable-particle tracking initially limited number of conditions
   --- omero should have capacity for both
   --- some standard, some custom.
   --- also a more advanced way to launch
   --- not just attachments. integrated into the database
   --- usability work is essential
  -- donald:
   --- core routines
   --- matlab/cellprofiler. Martin: things linked to the image? yes.
   --- do we need to have a set of routines?
  -- ROI
   --- queryability?   
  -- Bernd: not advisable for OMERO to do data analysis
   --- from all the different users, it boils down to workflows on ROIs or images (KNIME/Taverna)
   --- leave OMERO as a blunt image database
  -- Chris
   --- working with the data is not currently possible (just attachments)
   --- strongly typed system in OME-Perl
  -- Christoph: try to separate out things. storage (protocol in astronomy, HDF5, think as matrices)
  -- Chris: are people willing to use?
  -- Catrina: In relational database
  -- Bernd: analysis results are numerical results and they should be dealt with outside of OMERO
   --- quality control, and the next level, and eventually sequence data,...
   --- The purpose of OMERO should be to store images, ROIs, accessing raw data.
   --- Other data analysis should be separate, retrieve, store, and make available
  -- Catrina: 
  -- Chris: it can't be a dead end, how does someone else use it
  -- Karsten: only possible if they can access it.
  -- Donald: ways to access the other storage 
  -- Martin: currently you can't get to the dead link, because you can't organize all the data in your lab
  -- Curtis: practical approach. pick 3-5 use cases and what do we need to solve these in one year. [talking about interoperability; cell profiler]
  -- Donald: some of these are really easy (deconvolution, projection)
  -- Chris: Not enough to just jam blobs in
  -- Catrina: is there a base number of tools so I don't have to import it everywhere.
  -- Donald: have to provide more infrastructure (basic scripts) and a library to make this usable
  -- Martin: if you say that, we haven't got beyond the import stage
   --- How many people are using OMERO? No one. Can't use scripts.
   --- "Constantly told you are an exceptional script"
   --- We can't use it.
  -- Catrina: Before Beta4 it was not usable.
  -- Matthieu: workflow issue - don't want to duplicate data. If only put it in, then can't get it back out.
  -- Katrina: leave it on storage.
  -- Bernhard: it works for Columbus. But for more users, you need more plugins (interoperability)
  -- Matthieu: everyone has their own plugin
  -- Martin: there's no point if you can't export OME-TIFF
  -- Chris: don't export it, because no one uses it. no one uses it because you can't export it.
  -- Martin: "not enough", but if you have 5 formats, you can import them all, and then convert.
  -- Curtis: you can use bio-formats today
  -- Matthieu: point of OME-TIFF is that you can read the planes. convert and then can only read it with bf??
  -- Martin: everyday thing that's not there.
  -- Katrina: high-priority to do these basic things.
  -- Ask software vendors to do it.
  -- Matthieu: OME-TIFF is pretty useless at the moment.
  -- Chris: ImageJ we can solve. Won't talk to metamorph.
  -- Cat: can we send you a survey, and get a list
  -- Bernd: show stoppers: data duplication. Examples of using interfaces in R, Matlab, Java/Taverna
  -- Brian: getting data in, data duplication, dead end.
  -- Donald: is it true that it's so broken? don't do analysis
  -- Martin: at a startup level, it's the two things. don't want to duplicate, but if you can't do anything with it.
  -- Josh: Export is still an issue
  -- Katrina: but i want to use omero to organize.
  -- Jean-Marie: showing people their file system
  -- Gianluigi: OMERO.fs is a filter to the rest of the world. mimetypes from OMERO.fs saying "here's for the rest of the world"
   --- the beginning step for federation

 - after coffee
  -- stray things
   --- display usage? yes. pie charts. Usage per user is there. User per group?? Imperial - some users in multiple groups
   --- quotas
   --- data to grant codes
   --- exporting a whole project
   --- archiving: the flag? seemed ok. can i do it from insight? Not sure.
    ---- why would you want to not see the archived rows? (Gianluigi: stages?) Still seeing the thumbnail. Dangling pointer.
    ---- Martin: hiding is separate from archiving.
     ---- getting feedback from OMERO.fs on which storage stage something is on. (speed: red-yellow-green)
    ---- Katrina: possible to have OMERO.fs do something for you??
    ---- moving files between OMERO.fs instances (events?)
    ---- Matthias: export project to CD -> space is free -> re-import. More interesting, importing into another system.
     ----- firewall prevents two servers from being connected to one another [ final cut pro ]

 - scripts
  -- counting cells
  -- frap: curve, ...
  -- shape of blobs
  -- deconvolution (psf)
  -- basic image processing
  -- fret
 - rendering: randombox, glows, saving gradient maps as files, ... saving LUT color map
 - roi (open)
  -- sources
   --- manual in imagej
   --- segmentation algorithm
   --- zeiss/lsm & array scan & metamorph
   --- acapella: small binary masks.
  -- uses:
   --- scale bars
   --- properties (intensities)
  -- queryability (complexity) 
   --- regions that overlap, "give me all the images that have ROI with certain value" (flur. intensity)
   --- ... certain shape, diameter, ...
   --- created yesterday
   --- colocalization? percentage? yes.
   --- flag a group of ROIs.
   --- 3D ROIs? 5D. or at least 4D. Surfaces? Mask like approach is better. (especially higher dimensions) <- Karsten. Jerome -> also happy.
   --- masks even above polygons.
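The queryability wish list above amounts to filtering ROI records by composable predicates (intensity, shape, creation date) and collecting the distinct matching images. A hypothetical sketch of that idea; the record fields are invented for illustration and do not reflect OMERO's actual ROI model:

```python
# Sketch of ROI queryability: "give me all the images that have an ROI
# with a certain value / shape / creation date". Fields are invented.
from datetime import date

rois = [
    {"image": 1, "shape": "mask",    "mean_intensity": 140.0, "created": date(2009, 5, 27)},
    {"image": 2, "shape": "polygon", "mean_intensity": 80.0,  "created": date(2009, 5, 28)},
    {"image": 2, "shape": "mask",    "mean_intensity": 200.0, "created": date(2009, 5, 28)},
]

def images_matching(rois, *predicates):
    """Distinct, sorted image IDs having at least one ROI that satisfies every predicate."""
    return sorted({r["image"] for r in rois
                   if all(p(r) for p in predicates)})

bright = lambda r: r["mean_intensity"] > 100   # fluor. intensity threshold
masks = lambda r: r["shape"] == "mask"          # mask-like ROIs preferred above
print(images_matching(rois, bright, masks))     # [1, 2]
```

In a server this would be a database query rather than an in-memory filter, but the composable-predicate shape is the same, which is why structured, queryable ROI storage matters more than opaque attachments.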
 - linkage/workflows: not a priority 
 - just storage
  -- not against it (Bernd) but long term focus is something else. Matthias agrees. displaying all the forms is too complicated.
  -- external ids
 - data in/out (workflow)

- Afternoon
 -- Things we've got to fix
  --- DataIn
   ---- Need sample data. Send us your broken import files.
   ---- Speed? (no comment)
   ---- File formats? Based on samples with full metadata. 
    ----- Martin: 
     ------ rather than just not importing, could offer a 'minimal metadata import' option.
     ------ Brian: next version will have config file so you can turn on/off import formats.
    ----- Partial imports? Difficult.
     ------ But for a certain class of exceptions, possible to import with minimum metadata
    ----- And without re-import? Overwriting the metadata? Only if a failed import?
    ----- Compatibility-mode import and re-import
     ----- Does this encourage users to import without metadata? (cf. jpg/png issue)
    ----- Having feedback automatically upload the file? YES!
   ---- File-stitching : putting all the timepoints t/z back together
  --- DataOut
    ---- OME-TIFF: everywhere (+web) 
   ---- What happens to annotations/pdfs/...? Option prompt when exporting image to ome.tiff.
  --- DataDupe
   ---- Supporting NAS systems, ZFS. Deciding on polling based infrastructure
   ---- Need reference sites!

-- Release schedule
 --- First release, Oct 1.
  ---- Things to fix (above) 
  ---- Simple guide on how to analyze with ImageJ/matlab/... (here's how you do a thresholding)
  ---- Will need testing with actual users.
  ---- demo server? community-run test servers? (no expectations, we promise)
 --- Second release, xxx
  ---- Doing something/science/discovery in OMERO: some set of basic functionality
   ----- Spot-finding, size, etc. See the list from morning session.
  ---- Note: as soon as sites start using OMERO, the software will come.
  ---- Getting other commercial software to talk to OMERO (e.g. metamorph)?
  ---- Supporting open source projects really well?
  ---- Regions of Interest
 -- General process
  --- testing
  --- docs (feedback!)
  --- beta sites: scripts for doing system testing
    --- Pasteur, Imperial, Cambridge, LOCI
 -- 3 to 5 years?
  --- EM & other applications. Supporting images that are data.
  --- Scientific publishing and image repositories
   ---- A European archive?
  --- Back in Paris, new analysis and visualization tools
   --- Integration with many different systems (XML, etc.)
  --- Two sites as well integrated as Dundee (Developers there? Workshops?)
  --- Jean-Marie said:....bring food if you're staying at Josh's place, but don't worry about the chocolate!
  --- Seamless flow of data into JCB
  --- Image processing platform in the future
  --- Reaching all the other folks that are about to start creating yet another image database
   --- Presence at more conferences (e.g. ISBI)

Support Files

Slides and Images from the meeting to download
