Perform joins with large datasets using the new processor 🎉
Have you activated all the latest features released on your Opendatasoft portal in 2025? Catch up with our summer recap to stay fully up to date before the new season, especially if you missed our latest newsletter. 📮
🇫🇷 Newsletter recap (FR)
🇬🇧 Newsletter recap (EN)
They are now fully compatible with the latest version of DCAT-AP-CH (v2) and its recent updates. 🚀

By using this metadata template, your portals will be fully harvestable by opendataswiss, the Swiss national portal. Indeed, opendataswiss has recently updated its metadata validation system in line with DCAT-AP-CH v2.

⚠️ Until opendataswiss migrates to opendata.swiss next (planned for 2026), importing non-compliant metadata on opendataswiss will remain possible but will trigger error emails. We therefore recommend updating your DCAT-AP-CH metadata now for optimal harvesting.

💡 What should you pay particular attention to?
- opendataswiss checks are now more thorough. All mandatory metadata must be present and accurate. The compliance of recommended and optional properties, if provided, is also checked. Make sure to review the completeness and accuracy of your metadata.
- The dct:license metadata replaces dct:rights. The dct:license metadata refers to the terms of use of datasets on opendataswiss.
We have a few datasets that are time-series logs; telemetry sensor readings are one example. At this point we can only display data down to the minute, and we want to be able to display (and export) down to the second.
I have a page: https://cityobservatory.birmingham.gov.uk/explore/dataset/test-emergency-admissions-for-copd-icp-outcomes-framework-birmingham-and-solihull-wards/insight

It has 4 filters that I want to affect the majority of the visuals. However, I want the map to ignore anything applied to the 3rd filter, "area_name", but respect all other filters. How can I achieve this? I have tried the following, either as its own context or as part of the main one at the start of the page, but it just won't listen:

<ods-dataset-context context="mapctx"
                     mapctx-dataset="{{ ctx.dataset.datasetid }}"
                     mapctx-parameters="{ 'refine.date': ctx.parameters['refine.date'],
                                          'refine.time_period_range': ctx.parameters['refine.time_period_range'],
                                          'refine.ethnicity': ctx.parameters['refine.ethnicity'],
                                          'refine.imd': 'ALL' }">
Hi! I'm aware that users can subscribe to datasets, giving us an easy method of contact, but could we possibly add a "subscribed" filter for users within the data catalogue? This way users could take advantage of the feature as a kind of "favourites" functionality, and use it to aid their navigation. I think it would also increase the percentage of users who subscribe to datasets, making things easier for us as admins.
Hello Opendatasoft Community, I'm Prakash Hinduja. I'm currently working on a dataset that needs to stay up to date, and I'd like to schedule automatic data updates in Opendatasoft. I'd really appreciate any guidance, examples, or tips from your experience. Thanks in advance for your help!
Regards,
Prakash Hinduja
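One common pattern is to configure the scheduler on the dataset's source in the back office; another is to trigger a republish from a cron job. Below is a minimal, hedged sketch of the second approach. The portal URL and API key are placeholders, and the endpoint path is an assumption modelled on Opendatasoft's Automation API conventions; check your portal's API documentation for the exact route before relying on it.

```python
# Sketch: trigger a dataset republish over HTTP on a schedule (e.g. from cron).
# ASSUMPTION: the endpoint path below is hypothetical -- verify it against
# your portal's Automation API documentation.
import urllib.request

PORTAL = "https://example.opendatasoft.com"   # placeholder portal URL
API_KEY = "YOUR_API_KEY"                      # personal API key from the back office


def build_publish_request(dataset_uid: str) -> urllib.request.Request:
    """Build the (assumed) republish call for one dataset."""
    url = f"{PORTAL}/api/automation/v1.0/datasets/{dataset_uid}/publish"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Apikey {API_KEY}"},
    )


if __name__ == "__main__":
    req = build_publish_request("da_abc123")
    # urllib.request.urlopen(req)  # uncomment to actually fire the call
    print(req.full_url)
```

A cron entry such as `0 6 * * 1 python republish.py` would then run this every Monday at 06:00.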
From September 30, our extraction system for FTP and SFTP sources will be updated to ensure better consistency between your published datasets and their original source.

🔎 What's changing
Currently, extracted files are cached even if they are deleted from the source. With the new process, only files that are present in your external source at the time of republishing will be kept. Deleted files will no longer be retained in Opendatasoft.

👉 Actions to take before September 30
If there are deleted files still stored in your cache, you can:
- Keep them, by adding them back to your source, or by exporting your current dataset and reintegrating it into the source.
- Permanently delete them, by clearing the cache for the relevant datasets, starting from a clean slate.

🚀 Benefits of this update
- Faster and smoother loading in the back office
- Data always aligned with your external source
- Improved traceability of extraction errors

💡 Need help?
Contact your Customer Success Manager or our Support team.
The form feature keeps evolving to offer you more flexibility and improve the quality of collected data: 👉 Learn more
Haven't created a form yet? A free form is still included by default. Just ensure the "forms" permission is enabled in your back office to access the feature. 🙌
Hi everyone, I'm Dario Schiraldi, currently working on a project where I need to join multiple datasets, and I'd love to hear your suggestions on best practices for performing data joins. Specifically, I'm interested in methods for both SQL and Python. I'd appreciate any insights, tips, or resources you have! Thanks in advance!
Regards,
Dario Schiraldi, CEO of Travel Works
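For anyone comparing the two approaches, here is a minimal, self-contained illustration of the same inner join done once in SQL (via Python's built-in sqlite3) and once in plain Python. The table and column names are invented for the example.

```python
# Same inner join, two ways: SQL (sqlite3) and a plain-Python hash join.
import sqlite3

wards = [("W1", "Aston"), ("W2", "Bournville")]
stats = [("W1", 120), ("W2", 95), ("W3", 40)]  # W3 has no matching ward

# --- SQL join ---
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wards (code TEXT, name TEXT)")
con.execute("CREATE TABLE stats (code TEXT, admissions INTEGER)")
con.executemany("INSERT INTO wards VALUES (?, ?)", wards)
con.executemany("INSERT INTO stats VALUES (?, ?)", stats)
sql_result = con.execute(
    "SELECT w.name, s.admissions FROM wards w JOIN stats s ON w.code = s.code"
).fetchall()

# --- Plain Python: index one side in a dict, then probe it (a hash join) ---
by_code = {code: name for code, name in wards}
py_result = [(by_code[code], n) for code, n in stats if code in by_code]

# Both drop the unmatched W3 row, as an inner join should.
assert sorted(sql_result) == sorted(py_result)
```

The dict-indexing pattern is what libraries like pandas (`DataFrame.merge`) do under the hood for equality joins; for large datasets, indexing the smaller side keeps memory use down.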
It would be great for datasets which are not set to public to be marked as "private" in the Explore page view. Currently, as an administrator, I have to click "Edit" and then go to the Security tab to see whether "Access restricted to allowed users and groups" is enabled. It would be awesome if the Explore page had some sort of corner marker or other indication that the dataset is in some way restricted. A couple of example options to mark a dataset:
Does anyone have any tips on how to shape the data in our portals to work better with the AI tools? The concern from our leadership is that someone will ask a question and, if the data contains too many filterable items, it could return an incorrect result. Are there any guidelines on how to shape the data to make it easier for AI to understand and provide results?
Are there plans to add more mapping capabilities to Studio maps, for example clusters, dots and shapes, and heat maps: the same functionality that is shown in Map Builder?
As I'm trying to create a page with the code editor, I'm facing an issue with the refinement of datasets. To automate a line chart for different contexts, so that I can select one dataset from a list of many and use the same ods-chart tag, I want to refine all the contexts before referencing them in the chart widget. The important aspect of my question is this: for this refinement, it would be more convenient for me to define the values that should be excluded, instead of the ones that should be included. Is there a possibility to do so? Thank you all in advance! :)
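For reference, the Opendatasoft Search API (v1) accepts `exclude.<field>=<value>` parameters as the mirror image of `refine.<field>`, which is the usual way to express "everything except these values". Below is a small sketch that builds such a query string; the dataset and field names are invented for illustration, and you should confirm against your portal's API documentation that the exclude parameters behave as expected in your widget contexts.

```python
# Sketch: build a Search API (v1) query that EXCLUDES facet values
# instead of refining on them. Field/dataset names are made up.
from urllib.parse import urlencode


def build_query(dataset: str, excluded: dict) -> str:
    """Build a records-search query excluding the given facet values.

    `excluded` maps field names to lists of values to drop,
    e.g. {"area_name": ["Aston", "Bournville"]}.
    """
    params = [("dataset", dataset)]
    for field, values in excluded.items():
        for value in values:
            params.append((f"exclude.{field}", value))
    return "/api/records/1.0/search/?" + urlencode(params)


print(build_query("my-dataset", {"area_name": ["Aston", "Bournville"]}))
```

In widget code the same idea applies by putting `'exclude.<field>': '<value>'` entries in a context's parameters object rather than `'refine.<field>'` entries.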
Hi, my name is Prakash Hinduja (Hinduja Family Swiss). I want to know about schema management for datasets and why it is important.
Regards,
Prakash Hinduja (Hinduja Family Switzerland)
Hello, using the dataset named "ods-api-monitoring", I am able to get the number of downloads for a specific dataset. However, I would like to perform this task every month, which is why I want to automate the process in Python. I tried filtering the dataset and taking the link of the generated CSV in order to load it in Python. Nevertheless, I get an error when running the program; it seems to be linked to authentication on the platform. Do we have access to an API for the "ods-api-monitoring" dataset? Or is there another recommended way to retrieve this information? Thank you for your help.
Best regards,
Eva Berry
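Monitoring datasets are usually restricted, so anonymous CSV links fail; passing an API key in an `Authorization: Apikey <key>` header is the standard way to authenticate against the Explore API. Here is a hedged sketch: the portal URL and key are placeholders, and the `dataset_id` field name in the `where` clause is an assumption that should be checked against your portal's ods-api-monitoring schema.

```python
# Sketch: query a restricted monitoring dataset over the Explore API
# with an API key. Portal URL, key, and the `dataset_id` field name
# are placeholders/assumptions -- verify them on your own portal.
import json
import urllib.request
from urllib.parse import quote

PORTAL = "https://example.opendatasoft.com"  # replace with your portal
API_KEY = "YOUR_API_KEY"


def build_monitoring_request(dataset_id: str, limit: int = 100) -> urllib.request.Request:
    """Build an authenticated records query for ods-api-monitoring."""
    where = quote(f"dataset_id='{dataset_id}'")
    url = (
        f"{PORTAL}/api/explore/v2.1/catalog/datasets/ods-api-monitoring/records"
        f"?where={where}&limit={limit}"
    )
    return urllib.request.Request(url, headers={"Authorization": f"Apikey {API_KEY}"})


if __name__ == "__main__":
    req = build_monitoring_request("my-dataset")
    # with urllib.request.urlopen(req) as resp:      # fires the real call
    #     records = json.load(resp)["results"]
    print(req.full_url)
```

Run monthly from cron or a scheduled task, this replaces the manual "copy the CSV link" step entirely.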
Hello everyone, I'm Kamal Hinduja from Geneva, Switzerland. I'm new to this community and look forward to contributing positively to the discussions while learning from your insights. Could someone please explain how Opendatasoft integrates with public datasets? Thanks in advance!
Kamal Hinduja, Geneva, Switzerland
I have two datasets, one with polygons and names and one with data. I load the map on the LHS of the screen and use the refine-on-click feature within <ods-map-layer> to load an HTML table on the RHS of the screen. However, I'd like the map to show the selection. I've tried using highlight-on-refine="true" but I suspect this doesn't work because the data on the map is not actually being refreshed! Here is my code:

<div class="row ods-box">
  <ods-dataset-context context="dfesmap,dfesdata"
                       dfesmap-dataset="dfes-primary-polygons"
                       dfesdata-dataset="dfes-2023-primary-data0">
    <div class="col-md-8">
      <ods-map style="height:560px" scroll-wheel-zoom="true">
        <ods-map-layer context="dfesmap" display="choropleth" refine-on-click-context="dfesdata" refine-on-click
We would like a way for our users to favorite a dataset, as well as provide feedback on what they liked about it.
I would like to garner some ideas on how y'all manage updating static, flat-file datasets on a regular basis. Most of ours pull from the source via API, but a few require us, every week/month/year, to download a flat file from the source and re-upload it to our platform. At first I thought a simple calendar noting what needs doing when would be a good idea. But some datasets don't drop on a regular schedule, or can be delayed. So if I have a calendar entry saying "Download X data today" and it's not there, it doesn't get done, and I forget that I've not actually done it. So I need some way to check it off. How do you all efficiently manage your flat-file updates? What tools or methods do you use?