Organisation

terrestris GmbH & Co. KG







Topic type





Target Type

Case Studies: Relate your experiences.





People new to open source geospatial

Technical / Developer





ID Number

1





Name

Adrià Mercader


Organisation

Open Knowledge Foundation


Email

adria.mercader@okfn.org





Paper Title


Building the next generation of geo.data.gov with CKAN

I can give a practical demo


yes




Paper Abstract (short)


The next version of data.gov, the main online data catalog from the US Government, will combine both non-geographic and geographic data in a single portal powered by CKAN, an open source data management system. This presentation will explore the main capabilities of the new catalog, the challenges found during its implementation, the open source geospatial tools used to overcome them, and its future plans.





Paper Abstract (long)


Data.gov is the main online data catalog from the US Government, aggregating data from several publishers, including Federal Agencies, States, Universities, etc. As part of a series of wider changes, a new version of the portal is being built, which will merge the current data.gov and geo.data.gov sites into a single catalog hosting both non-geographic and geographic data. This combined portal will be powered by CKAN, an open source data management system. A mature and widely used project, CKAN is maintained by the Open Knowledge Foundation, a UK-based non-profit organization that promotes open access to information. The main goals of CKAN are to help publishers manage and place data online and make that data easily discoverable for users, while allowing developers to customize and extend the software for maximum re-use potential. Already used in several governmental Open Data catalogs across the world [1], CKAN will replace two existing instances currently powered by proprietary software. The implementation of the new version of geo.data.gov has posed significant challenges, from technical ones (such as harvesting and managing large numbers of datasets) to user experience and design ones (such as presenting such a large amount of data in a useful and meaningful way). Data needed to be harvested from different sources across a wide range of organizations, using an authorization process compatible with the existing systems in place. Metadata sources used different protocols and formats, with significant disparities in quality. The harvesting extension of CKAN provides a framework for building harvesters for different kinds of sources, managing them via a web interface and generating job reports. Existing harvesters for CSW servers and Web Accessible Folders were improved, allowing the import of documents in both ISO 19139 and FGDC formats, and new ones were created for other sources such as ArcGIS REST API endpoints or Z39.50 databases.
Custom validation options were implemented to deal with common errors encountered, such as wrong bounding boxes, misplaced elements in the XML documents, etc. Once the metadata is imported into CKAN, it follows an approval process where it can be reviewed by authorized users based on the organization it belongs to, with tools that allow bulk processing of large numbers of datasets. After becoming publicly available, datasets can be found via a user interface that offers full-text search, filtering by bounding box and term faceting, and via a powerful JSON-based search API that allows building third-party applications and mashups on top of the catalog. Great effort has been put into making search across such a large volume of data useful, with special work on ranking algorithms and on aggregating conceptually close datasets into collections (for example, map series) so they don't interfere with the main search results. The same metadata is exposed via a CSW endpoint to ensure compatibility with other geospatial software. This has been done by leveraging pycsw, an open source CSW implementation, and a number of improvements have resulted from the collaboration between the two projects' teams. In terms of data visualization, the portal integrates with online viewers based on GeoExt and OpenLayers for common geospatial formats and services like WMS or KML, with plans to extend support to others. At the same time, existing previews for non-geospatial formats like CSV and PDF are available, giving users access to different types of data and making the catalog useful to users without a geospatial technical background. Both the US and Canada Open Data Initiatives are committed to using and supporting CKAN, as well as to providing an open source distribution based on CKAN and Drupal, the Open Government Platform, for other governments and agencies to meet open data and open government policies and requirements.
The first version of the portal will be available in the coming weeks at http://catalog.data.gov/

More information on CKAN and the main source code repository can be found at the following links:

* http://ckan.org
* https://github.com/okfn/ckan

[1] http://ckan.org/instances
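The JSON search API mentioned in the abstract can be exercised with a plain HTTP request. A minimal sketch, assuming the portal exposes CKAN's standard `package_search` action (the `ext_bbox` spatial filter comes from the ckanext-spatial extension, not core CKAN; the query terms and bounding box below are illustrative):

```python
from urllib.parse import urlencode

# Action API endpoint on the portal named in the abstract.
BASE = "http://catalog.data.gov/api/3/action/package_search"

# Full-text query plus a bounding-box filter (minx,miny,maxx,maxy in WGS84).
# ext_bbox is provided by the ckanext-spatial extension (an assumption here),
# and the search terms are purely illustrative.
params = {
    "q": "forest cover",
    "ext_bbox": "-125,24,-66,50",  # roughly the continental US
    "rows": 10,
}

url = BASE + "?" + urlencode(params)
print(url)

# Fetching and decoding would look like this (requires network access):
#   import json, urllib.request
#   result = json.load(urllib.request.urlopen(url))["result"]
#   for ds in result["results"]:
#       print(ds["title"])
```

The response is a JSON document whose `result.results` list holds one metadata record per dataset, which is what makes the third-party mashups mentioned above straightforward to build.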





Topic type





Target Type

Case Studies: Relate your experiences.

Visualization: effective presentation of information.

Development: new developments in products.

Collaboration: data collection, data sharing, open standards.







People new to open source geospatial

Manager


End User

Technical / Developer






Additional Presenters


Name

Irina Bolychevsky

Organisation

Open Knowledge Foundation

Email

irina.bolychevsky@okfn.org




ID Number

236





Name

Pavel Treml

Organisation

T.G. Masaryk Water Research Institute, p.r.i.

Email

pavel_treml@vuv.cz




Paper Title


Bulk interpolation with R scripting language

I can give a practical demo


yes




Paper Abstract (short)


The contribution presents the development and application of a script for bulk assessment and interpolation of data in R scripting language.





Paper Abstract (long)


The paper explains the development and application of a script for bulk assessment and interpolation of data in the R scripting language. In practice it is often necessary to analyze a large amount of spatial data from multiple periods of time: first, to become familiar with the data before processing them, and second, during the space-time analysis carried out in the course of the research. For the majority of analyzed quantities it is necessary to find out whether the data change in space, how they change in time, whether there are errors in the processed data, etc. An optimal solution to this problem is the R scripting language. R comes with many interpolation methods (kriging, IDW, etc.), can read virtually any input format (text, CSV, XLS and other file formats), and provides a large range of optional picture, plot and graphical outputs. It is also possible to generate picture and text plots for every in-situ observation location of the examined quantity, perform different statistical comparisons, and more. In a short time it is possible to create a simple application that, after minor modification, can be employed for another bulk interpolation task. Thanks to this, once one exemplary application script has been created, additional variations of the task can be created and run efficiently, whereas other approaches (such as desktop GIS programs) usually require repeating the same steps: setting the color ramp and color interval breaks, creating output map layouts, adding lists of extra data layers, transforming shapefiles to an identical coordinate system, and additional processing such as statistical testing, creation of tables or creation of further plots that give a complete picture of the quantities' behavior. In our contribution, we clearly demonstrate how work with GIS data can be easily automated using the R scripting language.
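The abstract lists IDW among R's built-in interpolation methods. As a language-neutral illustration of what one bulk interpolation step computes, here is a minimal inverse-distance-weighting sketch in Python (the R workflow itself would typically use packages such as gstat; the sample coordinates and values below are invented for the example):

```python
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation.

    xy_obs   -- (n, 2) array of observation coordinates
    values   -- (n,) observed values
    xy_query -- (m, 2) array of points to interpolate at
    """
    # Pairwise distances between every query point and every observation.
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)      # avoid division by zero at exact hits
    w = 1.0 / d ** power          # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Four observations at the corners of a unit square (invented data).
obs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
vals = np.array([0.0, 10.0, 10.0, 20.0])

# The centre is equidistant from all four points, so the estimate
# is their plain mean.
print(idw(obs, vals, np.array([[0.5, 0.5]])))  # -> [10.]
```

Running the same function over a grid of query points for each time period is exactly the kind of repetitive step that the scripted approach described above automates.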





Topic type





Target Type

Case Studies: Relate your experiences.

Visualization: effective presentation of information.

Development: new developments in products.

New data: handling new data models, for example 3D & temporal data, or big data.







People new to open source geospatial

End User





Additional Presenters


Name

Jiri Kadlec

Organisation

Aalto University

Email

jiri.kadlec@aalto.fi




ID Number

248





Name

Pieter De Graef


Organisation

Geosparc


Email

pieter.degraef@geomajas.org





Paper Title


Bulk publishing of sensitive authentic data sources

I can give a practical demo


yes




Paper Abstract (short)


A case study and architectural considerations on data publishing for INSPIRE.





Paper Abstract (long)


Internally, an organization's data is highly secured, normalised and protected through business logic. When an organization needs to publish (parts of) this sensitive authentic data to the public, a number of considerations come into play. This presentation covers a case study and presents an architecture showing how publishing for INSPIRE was achieved with FOSS components. Topics include security, ETL, filtering and data modeling. We will also highlight why OGC services may not be the best option at every turn!





Topic type





Target Type

Case Studies: Relate your experiences.

Collaboration: data collection, data sharing, open standards.







People new to open source geospatial

Manager


End User

Technical / Developer





ID Number

35





Name

Brian Low


Organisation

Canadian Forest Service, Natural Resources Canada


Email

blow@nrcan.gc.ca





Paper Title


Canada's National Forest Information System and FOSS4G

I can give a practical demo


yes




Paper Abstract (short)


Canada's National Forest Information System relies on open source software to address issues related to forestry.





Paper Abstract (long)


Canada's National Forest Information System (NFIS) was designed to address and report on issues related to Canada's vast forests. Issues related to sustainable forest management, conservation of genetics and forest disturbances, as well as viewing the most current and consistent information on the forests, are all addressed within NFIS. NFIS is a distributed spatial data infrastructure (SDI) focusing on forestry. Nodes are spread across the country at federal, provincial and territorial centres. All of these nodes rely on "Free and Open Source Software" and internationally adopted standards such as those of the OGC and W3C. The majority of these nodes consist of an open source stack from operating systems to databases to web containers and presentation layers (Linux, Apache, OSGeo projects, etc.). This presentation will focus on the general NFIS infrastructure and one new application, the Long-Term Research Installation Catalogue (LTRIC). LTRIC is a database of established long-term forest research sites of the Canadian Forest Service and its collaborators. The catalogue was implemented using PostgreSQL/PostGIS, MapServer, OpenLayers and OpenStreetMap, along with other open source products. A discussion of the choice of FOSS4G technologies and their use will be given.





Topic type





Target Type

Case Studies: Relate your experiences.

Business Cases: building the economic case.

Collaboration: data collection, data sharing, open standards.





People new to open source geospatial

Manager


End User

Forestry, Natural Resources, Geospatial





ID Number

160





Name

Mark Jackson


Organisation

CERC


Email

mark.jackson@cerc.co.uk





Paper Title


Carbones.eu: a FOSS geoportal for atmospheric science

I can give a practical demo


yes




Paper Abstract (short)


A case study of a big-data geoportal for an EU carbon cycle research project (www.carbones.eu), created by CERC (the presenters), publishing the scientific results using only FOSS components. The web portal provides map animations, over one million maps with user-editable colour scales, and graphs, embedded in a Content Management System. It reads data directly from the scientific data files on the server (in OGC-compliant netCDF) and publishes them through INSPIRE-compliant OGC web services (discovery, view and download). We will describe both the custom FOSS code created for the project and the third-party FOSS components used (MapServer, PostgreSQL, ncWMS, OpenLayers, Alfresco).





Paper Abstract (long)


We present a case study of a geoportal for an EU FP7 atmospheric science project (www.carbones.eu) created by CERC (the presenters). The science project has produced a global reanalysis of carbon fluxes and pools over a twenty-year period, which can be used by climate scientists to improve their models. The geoportal was built entirely from FOSS components under a business-friendly license, and provides the scientific results as highly interactive maps and graphs. The intended users are both the general public and climate scientists, who can use the results to improve their modelling of climate change. Scientific users, including sponsors of geoportals from rival projects, have made very generous comments about the geoportal, such as "Congratulations on the great user interface, a dream… an extraordinary way to deliver information" and "The mapping works flawlessly and is very useful. The time series are great and very intuitive". The map features (visit www.carbones.eu and choose Product-Maps) include animations playing directly in the map interface, user-editable colour scales for the raster data, PDF creation, and export of animations and maps to KML for use in Google Earth. The maps are created directly on the server from the scientific data files, in OGC-compliant CF netCDF format. netCDF is a portable, compact, fast, flexible, extensible binary format for gridded (raster) data. It is also the preferred format for this data among the scientific users, so we were able to use the same data files created for the scientific end-users directly as input for the geoportal. The portal provides over one million map layers from 500GB of netCDF data ("big data"). The graph features (visit www.carbones.eu and choose Product-Time series) include a slider control to allow interactive selection of the date range on display, PDF creation and data download.
The geoportal has been embedded in a FOSS Content Management System (Alfresco) so that team members without web-development skills can edit the static content (text, images, etc.). The geoportal serves the data through INSPIRE-compliant web services, including discovery (metadata through OGC CSW), view (WMS) and download (direct access to the netCDF through HTTP). The third-party components used for the project include some well-known components (PostgreSQL, MapServer, OpenLayers, Alfresco) and some less well-known ones. In particular, ncWMS proved very useful: a 100% Java open source server designed for geoportals providing scientific data from netCDF files. It provides WMS directly from CF-compliant netCDF, supporting user-editable colour scales, time series of values at a point, export to Google Earth, and animations. During the Carbones project CERC were able to suggest some bug fixes to ncWMS which have now been included in the main trunk of the ncWMS project by the developers (University of Reading e-Science department). ncWMS includes a generic web user interface, but for Carbones we decided to create our own custom user interface in JavaScript. This allowed us to create a user interface tailored to the Carbones data, and also to make some improvements over the ncWMS interface: we have a fluid layout that uses the whole browser window, with splitter controls to allow the user to resize individual page elements; the choice of layers is presented through a tree control with explanatory balloon tooltips; we present more information about the layers than ncWMS makes available, including the absolute minimum and maximum values in the original data; and our legends are more elegant and user-friendly than the ncWMS default legends. The Carbones geoportal code has been published as FOSS by CERC (carbones.googlecode.com).
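The map images described above are delivered through the standard WMS interface, so a map request is simply a URL with well-known query parameters. A minimal sketch of building a WMS 1.3.0 GetMap request, assuming an endpoint of that version (the endpoint path and layer name below are hypothetical, not taken from the Carbones portal):

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint, for illustration only.
WMS_BASE = "https://example.org/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "carbon_flux",    # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",  # WMS 1.3.0 + EPSG:4326 uses lat/lon axis order
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
    "TIME": "2005-01-01",       # the WMS TIME dimension drives animations
}

getmap_url = WMS_BASE + "?" + urlencode(params)
print(getmap_url)

# Fetching the PNG would look like (requires network access):
#   import urllib.request
#   png_bytes = urllib.request.urlopen(getmap_url).read()
```

Stepping the TIME parameter through successive dates and displaying the resulting images in sequence is, in essence, how time-enabled WMS clients produce the kind of animations the portal offers.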

