Hello,

Using the dataset named “ods-api-monitoring”, I am able to find the number of downloads for a specific dataset. However, I would like to perform this task every month, which is why I want to automate the process with Python. I tried filtering the dataset and taking the link of the generated CSV in order to load it in Python. Nevertheless, I get an error when running the program; it seems to be related to authentication on the platform. Do we have access to an API for the “ods-api-monitoring” dataset? Or is there another recommended way to retrieve this information?

Thank you for your help.

Best regards,
Eva Berry
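For reference, a minimal sketch of the kind of authenticated request described above, assuming the monitoring dataset is exposed through the standard Explore API v2.1 and that your account has an API key with access to it. The portal URL and the field names in the `where` clause are placeholders and should be checked against the actual dataset schema.

```python
import requests

DOMAIN = "https://yourportal.opendatasoft.com"   # placeholder portal URL
API_KEY = "YOUR_API_KEY"                         # personal API key (assumption: key-based auth is enabled)

url = f"{DOMAIN}/api/explore/v2.1/catalog/datasets/ods-api-monitoring/records"
params = {
    # hypothetical field names; adjust to the real schema of ods-api-monitoring
    "where": 'dataset_id = "my-dataset" AND timestamp >= "2024-01-01"',
    "limit": 100,
}
headers = {"Authorization": f"Apikey {API_KEY}"}  # authenticated request instead of the public CSV link

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
print(response.json()["results"])  # list of matching monitoring records
```

Scheduling this script monthly (cron, Task Scheduler, or any orchestrator) would then replace the manual CSV download.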
We are setting up a process to update the resources for our datasets using the Automation API, and I am looking for best-practice advice. Here is what we currently do, starting with a dataset_id and a file that we want to upload:

1. Retrieve the dataset_uid using the endpoint /api/explore/v2.1/catalog/datasets/{dataset_id}.
2. Retrieve the currently used resource_uid of the dataset using the endpoint /api/automation/v1.0/datasets/{dataset_uid}/resources, with the dataset_uid from step 1.
3. Upload the file to the dataset using the endpoint /api/automation/v1.0/datasets/{dataset_uid}/resources/files, again using the dataset_uid from step 1. The response gives the file_uid of the uploaded file.
4. Update the dataset resource using the endpoint /api/automation/v1.0/datasets/{dataset_uid}/resources/{resource_uid}, with the dataset_uid from step 1, the resource_uid from step 2, and the file_uid from step 3.
5. Republish the dataset using the endpoint /api/automation/v1.0/datasets/{dataset_uid}/pu
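A rough sketch of the workflow described above, assuming API-key authentication. The response field names, the multipart field name, and the update body shape are assumptions and should be verified against the Automation API documentation; the final publish endpoint is truncated in the post, so its name is assumed here as well.

```python
import requests

DOMAIN = "https://yourportal.opendatasoft.com"   # placeholder portal URL
HEADERS = {"Authorization": "Apikey YOUR_API_KEY"}

def update_dataset_file(dataset_id: str, file_path: str) -> None:
    # 1. dataset_id -> dataset_uid via the Explore API (field name assumed)
    r = requests.get(f"{DOMAIN}/api/explore/v2.1/catalog/datasets/{dataset_id}", headers=HEADERS)
    r.raise_for_status()
    dataset_uid = r.json()["dataset_uid"]

    # 2. currently used resource_uid (response shape assumed)
    r = requests.get(f"{DOMAIN}/api/automation/v1.0/datasets/{dataset_uid}/resources", headers=HEADERS)
    r.raise_for_status()
    resource_uid = r.json()["results"][0]["uid"]

    # 3. upload the new file (multipart field name assumed)
    with open(file_path, "rb") as f:
        r = requests.post(
            f"{DOMAIN}/api/automation/v1.0/datasets/{dataset_uid}/resources/files",
            headers=HEADERS,
            files={"file": f},
        )
    r.raise_for_status()
    file_uid = r.json()["uid"]

    # 4. point the existing resource at the uploaded file (body shape assumed)
    r = requests.put(
        f"{DOMAIN}/api/automation/v1.0/datasets/{dataset_uid}/resources/{resource_uid}",
        headers=HEADERS,
        json={"datasource": {"type": "uploaded_file", "file": {"uid": file_uid}}},
    )
    r.raise_for_status()

    # 5. republish the dataset (endpoint name truncated in the post; "publish" is assumed)
    r = requests.post(f"{DOMAIN}/api/automation/v1.0/datasets/{dataset_uid}/publish", headers=HEADERS)
    r.raise_for_status()
```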
I’m trying to use some OpenStreetMap data in ODS. However, some of the data has multiple lat/lon pairs for a single ID, and when bringing it into ODS, ODS takes none of the lat/lon pairs for those records (see the screenshot of the multiple lat/lon pairs). The records with multiple pairs happen to be at the bottom, and I assume ODS uses the first 20 rows to determine a pattern. I want ODS to at least take the first lat/lon pair so the data is somewhat accurate; at the moment it just ignores these records and leaves the lat and lon fields blank. The query used for bringing in the data is here:

https://overpass-api.de/api/interpreter?data=%2F*%0AThis%20query%20looks%20for%20nodes%2C%20ways%20and%20relations%20%0Awith%20the%20given%20key%2Fvalue%20combination.%0AChoose%20your%20region%20and%20hit%20the%20Run%20button%20above%21%0A*%2F%0A%5Bout%3Ajson%5D%5Btimeout%3A25%5D%3B%0A%2F%2F%20gather%20results%0Anwr%5B%22highway%22%3D%22elevator%22%5D%28-27.870644599673355%2C152.0219421386719%2C-26.
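One possible workaround, sketched below as a preprocessing step rather than an ODS feature: flatten the Overpass JSON yourself so that every ID carries exactly one lat/lon pair (the first one), then feed the resulting CSV to ODS. This assumes the standard [out:json] response layout with an "elements" array; the bounding box in the query is shortened for illustration.

```python
import csv
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
# Illustrative, shortened version of the query from the post
QUERY = '[out:json][timeout:25];nwr["highway"="elevator"](-27.87,152.02,-26.0,153.5);out geom;'

elements = requests.get(OVERPASS_URL, params={"data": QUERY}).json()["elements"]

with open("elevators.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "lat", "lon"])
    for el in elements:
        if "lat" in el:                  # nodes carry lat/lon directly
            writer.writerow([el["id"], el["lat"], el["lon"]])
        elif el.get("geometry"):         # ways/relations: keep only the first vertex
            first = el["geometry"][0]
            writer.writerow([el["id"], first["lat"], first["lon"]])
```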
I have two datasets, one with polygons and names and one with data. I load the map in the LHS of the screen and use the refine-on-click feature within <ods-map-layer> to load an HTML table on the RHS of the screen. However, I’d like the map to show the selection. I’ve tried using highlight-on-refine="true", but I suspect this doesn’t work because the data on the map is not actually being refreshed. Here is my code:

```html
<div class="row ods-box">
  <ods-dataset-context context="dfesmap,dfesdata"
                       dfesmap-dataset="dfes-primary-polygons"
                       dfesdata-dataset="dfes-2023-primary-data0">
    <div class="col-md-8">
      <ods-map style="height:560px" scroll-wheel-zoom="true">
        <ods-map-layer context="dfesmap"
                       display="choropleth"
                       refine-on-click-context="dfesdata"
                       refine-on-click
```
🔎 What is Parquet export?
Parquet is one of the fastest data storage formats to read and load. Its hybrid storage format (by column and by row) optimizes reading performance.

✨ What are the advantages of the Parquet export format?
- Ultra-fast loading, ideal for large datasets
- Optimized storage, up to 7 times lighter than a GeoJSON export
- Easier analysis, thanks to its column organization
- A free and universal format, available as open source and compatible with a multitude of tools, including data science tools, Python, and the Opendatasoft Studio no-code environment via an extension

👉 Try out data export in the Parquet file format now!
1. Head to your portal, or any Opendatasoft portal (including the Data Hub)
2. Select a dataset
3. Go to the “Export” tab
4. Download the dataset in the Parquet format
5. Start exploring the data

To learn more, don't hesitate to read the Opendatasoft or Parquet documentation.
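A quick way to try a Parquet export from Python, as a sketch: the exports/parquet endpoint path is assumed by analogy with the CSV export endpoint, and the portal and dataset below are placeholders to replace with your own.

```python
import pandas as pd
import requests

# Placeholder portal and dataset; replace with any dataset you can export
url = ("https://yourportal.opendatasoft.com"
       "/api/explore/v2.1/catalog/datasets/your-dataset-id/exports/parquet")

with open("export.parquet", "wb") as f:
    f.write(requests.get(url).content)

df = pd.read_parquet("export.parquet")   # requires pyarrow or fastparquet
print(df.head())
```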
I do not see any limitations on the offset and limit parameters for the exports endpoint. However, when I make the following request:

https://data.longbeach.gov/api/explore/v2.1/catalog/datasets/lbpd-ripa-data-annual/exports/csv?offset=1

I get back a CSV file with only 2 rows, and the 2nd row says: “Streaming interrupted due to the following error: Invalid value for sum of offset + limit API parameter: 83714 was found but <= 10000 is expected. (error_code: InvalidRESTParameterError)”

I thought the 10000 limit for offset + limit only applied to the records endpoint. Is that wrong? If it’s useful, I am able to retrieve the entire dataset at once:

https://data.longbeach.gov/api/explore/v2.1/catalog/datasets/lbpd-ripa-data-annual/exports/csv?offset=0
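A workaround sketch, under the assumption (as observed above) that the full export with no offset streams the complete dataset: download everything once, then slice locally instead of asking the API to skip rows.

```python
import io
import pandas as pd
import requests

url = ("https://data.longbeach.gov/api/explore/v2.1/catalog/datasets/"
       "lbpd-ripa-data-annual/exports/csv")

resp = requests.get(url)                 # full export, no offset/limit
resp.raise_for_status()
df = pd.read_csv(io.StringIO(resp.text), sep=None, engine="python")  # sniff the CSV delimiter

offset = 1
print(df.iloc[offset:].head())           # rows from the desired offset onward
```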
Is it possible to remove the label indicating the cluster count, or at least to have an option to style it, so that the label has the same colour as the icon behind it?
We have a large segment of our population that is Asian-Indian and their primary language is Hindi. Does ODS have any plans to add more supported languages, specifically Hindi?
Hi,

I understand that geopoint schemas can't be edited when they are the centre of a geoshape, but is there any way to still edit the description? Mainly to state that this point is in fact the centre of the shape and not a location point (see below).

Thanks,
Ryan
Dear all,

It appears that text search for word components doesn’t include results where the word component starts in the middle of a word. Can you confirm this search behaviour, and is there a possible workaround or a planned feature that allows full-text search within words?

Example: the text search “kirsch” finds entries with the attribute “Trauben-Kirsche” but not those with “Traubenkirsche”.

URLs:
https://data.bs.ch/explore/dataset/100052/table/?q=kirsch&sort=art
https://data.bs.ch/explore/dataset/100052/table/?sort=art&refine.baumart_lateinisch=Prunus+padus
https://data.bs.ch/explore/dataset/100052/table/?sort=art&refine.baumart_lateinisch=Prunus+padus&q=kirsch
I want to display the evolution of the population in all districts of Würzburg with a line chart, using this dataset. In order to compare the districts, I created a series breakdown on the “Stadtbezirk” facet. But since there are 13 districts, the chart looks a little confusing. At the moment, one can unselect individual districts by hand by clicking on them in the legend. It would be more convenient if all but one or two districts were unselected by default. Is there any possibility to implement this in the HTML or CSS code? At the moment, the code for the chart looks like this:

```html
<ods-dataset-context context="stadtbezirkehauptwohnsitzaltersgruppen"
                     stadtbezirkehauptwohnsitzaltersgruppen-dataset="stadtbezirke_hauptwohnsitz_altersgruppen">
  <ods-chart scientific-display="false" align-month="true">
    <ods-chart-query context="stadtbezirkehauptwohnsitzaltersgruppen"
                     field-x="jahr"
                     maxpoints="0"
                     timescale="year"
                     series-breakdown="stadtbezirk">
```
Hello Opendatasoft Community,

I've been exploring the capabilities of Opendatasoft for creating dynamic dashboards and interactive data visualizations. The platform's flexibility in handling diverse datasets and its user-friendly interface have been impressive. However, as I delve deeper into more complex data representations, I'm curious about best practices to ensure optimal performance and responsiveness. Specifically, I'm interested in strategies for:
- Efficiently managing large datasets to prevent lag in visualizations.
- Implementing real-time data updates without compromising dashboard speed.
- Utilizing Opendatasoft's API features to enhance data interactivity.

Given the platform's robust features, I'm confident there are effective methods to achieve these goals. On a related note, I've been considering hardware upgrades to support more intensive data processing tasks. Would investing in an i5 gaming laptop provide the necessary performance boost for handling complex data visualizations?
Studio page: if you make a bullet list, no automatic indentation is applied to the individual bullet points when an item extends over several lines. This does not look nice, as in this example: