Perform joins with large datasets using the new processor 🎉
Hi! I’m aware that users can subscribe to datasets, giving us an easy way to contact them, but could we possibly add a “subscribed” filter for users within the data catalogue? That way users could take advantage of the feature as a kind of “favourites” functionality, which would aid their navigation. I think it would also increase the percentage of users who subscribe to datasets, making things easier for us as admins.
Ann Lutman is listed in your database as arriving in Australia on the Persian in 1827. I think that was her departure date, which I believe was 10 April 1827; I have no problem with this, as it is what I think happened. If I search your database further using Ann Lutman’s maiden name of Ann Williams, I find she sailed on the ship Sovereign on 12 July 1827, and again as Ann Williams on the ship Louisa on 21 August 1827. Are these three entries in fact all about the same person? Among my records I have two different convict numbers, 47935 and 43766. I can’t remember where or how I came to have TWO numbers for her. Do your records include convict numbers, which might help my search? Yours, Laurie Hudgson
Hello Opendatasoft Community, I’m Prakash Hinduja. I’m currently working on a dataset that needs to stay up to date, and I’d like to schedule automatic data updates in Opendatasoft. I’d really appreciate any guidance, examples, or tips from your experience. Thanks in advance for your help! Regards, Prakash Hinduja
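For reference, one common pattern is to let an external scheduler (such as cron) trigger a republish through the portal’s API. The sketch below is only a starting point: the Automation-API-style publish route, the domain, the dataset UID, and the API key are all placeholders or assumptions to verify against your portal’s API documentation.

```python
# Sketch: trigger a dataset republish from a scheduled job.
# Assumptions (verify against your portal's API documentation):
#   - DOMAIN, DATASET_UID and API_KEY are placeholders you must fill in;
#   - the publish endpoint follows this Automation-API-style shape.
import urllib.request

DOMAIN = "yourportal.opendatasoft.com"   # your portal domain (placeholder)
DATASET_UID = "da_xxxxxx"                # dataset UID, not its slug (placeholder)
API_KEY = "YOUR_API_KEY"                 # a key with publish permission (placeholder)

def publish_url(domain: str, dataset_uid: str) -> str:
    """Build the (assumed) publish endpoint URL for a dataset."""
    return f"https://{domain}/api/automation/v1.0/datasets/{dataset_uid}/publish"

def trigger_publish() -> int:
    """POST to the publish endpoint; returns the HTTP status code."""
    req = urllib.request.Request(
        publish_url(DOMAIN, DATASET_UID),
        method="POST",
        headers={"Authorization": f"Apikey {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

You could then run the script from cron (e.g. `0 6 * * * python republish.py` for a daily 6 a.m. refresh). Note that for datasets fed from a remote source, the back office also offers built-in scheduling, which is usually simpler than calling the API yourself.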
From September 30, our extraction system for FTP and SFTP sources will be updated to ensure better consistency between your published datasets and their original source.

🔎 What’s changing
Currently, extracted files are cached even if they are deleted from the source. With the new process, only files that are present in your external source at the time of republishing will be kept. Deleted files will no longer be retained in Opendatasoft.

👉 Actions to take before September 30
If there are deleted files still stored in your cache, you can:
- Keep them, by adding them back to your source, or by exporting your current dataset and reintegrating it into the source.
- Permanently delete them, by clearing the cache for the relevant datasets and starting from a clean slate.

🚀 Benefits of this update
- Faster and smoother loading in the back office
- Data always aligned with your external source
- Improved traceability of extraction errors

💡 Need help?
Contact your Customer Success Manager or our Support team.
I have a page: https://cityobservatory.birmingham.gov.uk/explore/dataset/test-emergency-admissions-for-copd-icp-outcomes-framework-birmingham-and-solihull-wards/insight

It has 4 filters that I want to affect the majority of the visuals. However, I want the map to ignore anything applied to the 3rd filter, “area_name”, while respecting all other filters. How can I achieve this? I have tried the following, either as its own context or as part of the main one at the start of the page, but it just won’t listen (note: the object keys originally used `=` instead of `:`, which is invalid AngularJS expression syntax):

<ods-dataset-context context="mapctx"
                     mapctx-dataset="{{ ctx.dataset.datasetid }}"
                     mapctx-parameters="{ 'refine.date': ctx.parameters['refine.date'], 'refine.time_period_range': ctx.parameters['refine.time_period_range'], 'refine.ethnicity': ctx.parameters['refine.ethnicity'], 'refine.imd': 'ALL' }">
The form feature keeps evolving to offer you more flexibility and improve the quality of collected data. 👉 Learn more

Haven’t created a form yet? A free form is still included by default. Just ensure the “forms” permission is enabled in your back office to access the feature. 🙌
Hi everyone, I’m Dario Schiraldi, currently working on a project where I need to join multiple datasets, and I’d love to hear your suggestions on best practices for performing data joins. Specifically, I’m interested in methods for both SQL and Python. I’d appreciate any insights, tips, or resources you have! Thanks in advance! Regards, Dario Schiraldi, CEO of Travel Works
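Since the question covers both SQL and Python, here is a minimal, self-contained sketch that runs a SQL join from Python using the standard-library sqlite3 module; the table and column names are purely illustrative:

```python
# SQL join executed from Python via the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stations (station_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE readings (station_id INTEGER, value REAL);
    INSERT INTO stations VALUES (1, 'North'), (2, 'South');
    INSERT INTO readings VALUES (1, 10.0), (1, 12.0), (3, 99.0);
""")

# LEFT JOIN keeps every station, even those with no readings;
# an INNER JOIN would drop station 2 (and already drops reading 3,
# which has no matching station).
rows = conn.execute("""
    SELECT s.name, r.value
    FROM stations AS s
    LEFT JOIN readings AS r USING (station_id)
    ORDER BY s.name, r.value
""").fetchall()

print(rows)  # [('North', 10.0), ('North', 12.0), ('South', None)]
```

The general best-practice points are the same in both worlds: pick the join type (inner/left/outer) deliberately, make sure the join keys have matching types, and check for unexpected row-count explosions caused by duplicate keys. In pandas, the equivalent is `pd.merge(left, right, on="station_id", how="left")`, and its `validate=` and `indicator=` options help catch exactly those key problems.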
It would be great for datasets that are not set to public to be marked as “private” in the Explore page view. Currently, as an administrator, I have to click “Edit” and then go to the Security tab to see whether this option is enabled: “Access restricted to allowed users and groups”. It would be awesome if the Explore page had some sort of corner marker or other indication that the dataset is in some way restricted. A couple of example options for marking a dataset:
Does anyone have any tips on how to shape data in our portals so that it works better with the AI tools? The concern from our leadership is that someone will ask a question and, if the data contains too many filterable items, it could return an incorrect result. Are there any guidelines on how to shape the data to make it easier for the AI to understand and provide results?
The previous limit of 100,000 records is gone! Enrich your data using large reference datasets such as France’s SIRENE (a directory of ID codes for every business in the country), without compromising performance! 👉 To try out the new processor, check the user documentation. 👉 To understand the potential impacts, watch this video.
Are there plans to add more mapping capabilities to Studio Maps, for example clusters, dots and shapes, and heat maps, matching the functionality available in Map Builder?
As I’m trying to create a page with the code editor, I’m facing an issue with the refinement of datasets. To automate a line chart for different contexts, so that I can select one dataset from a list of many and use the same ods-chart tag, I want to refine all the contexts before referencing them in the chart widget. The important aspect of my question is this: for this refinement, it would be more convenient for me to define the values that should be excluded, instead of the ones that should be included. Is there a way to do so? Thank you all in advance! :)
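A possible approach, assuming your portal’s Search API accepts `exclude.FIELD` context parameters as the mirror image of `refine.FIELD` (worth verifying in the API and widgets documentation; the dataset and field names below are illustrative):

```
<!-- Sketch, assuming 'exclude.FIELD' is accepted the same way as
     'refine.FIELD' (dataset, field and values are illustrative). -->
<ods-dataset-context context="chartctx"
                     chartctx-dataset="my-dataset"
                     chartctx-parameters="{ 'exclude.category': ['archived', 'draft'] }">
    <!-- ods-chart widgets referencing chartctx go here -->
</ods-dataset-context>
```

If that works on your portal, you can keep a single ods-chart tag and only vary the excluded values per context.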
Hi, my name is Prakash Hinduja (Hinduja Family, Swiss). I want to know about schema management for datasets and why it is important. Regards, Prakash Hinduja (Hinduja Family Switzerland)
Hello, Using the dataset named “ods-api-monitoring”, I am able to get the number of downloads for a specific dataset. However, I would like to perform this task every month, which is why I want to automate the process using Python. I tried filtering the dataset and taking the link of the generated CSV in order to load it in Python. Nevertheless, I get an error when launching the program; it seems to be linked to authentication on the platform. Do we have access to an API for the “ods-api-monitoring” dataset? Or is there another recommended way to retrieve this information? Thank you for your help. Best regards, Eva Berry
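One way to avoid scraping an export link is to query the dataset through the Explore API with an API key. The sketch below uses only the standard library; the endpoint shape, the `Apikey` authorization scheme, and the field names in the example filter are assumptions to verify against your portal’s API documentation.

```python
# Sketch: authenticated query against a dataset via the Explore API.
# Endpoint shape and field names are assumptions; check your portal's docs.
from urllib.parse import urlencode
import urllib.request
import json

def records_url(domain: str, dataset_id: str, where: str, limit: int = 100) -> str:
    """Build an Explore-API-v2.1-style records URL (assumed endpoint shape)."""
    query = urlencode({"where": where, "limit": limit})
    return f"https://{domain}/api/explore/v2.1/catalog/datasets/{dataset_id}/records?{query}"

def fetch_records(domain: str, dataset_id: str, where: str, api_key: str) -> dict:
    """Authenticated GET, sending the API key in the Authorization header."""
    req = urllib.request.Request(
        records_url(domain, dataset_id, where),
        headers={"Authorization": f"Apikey {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage (field names are illustrative, not guaranteed):
# fetch_records("yourportal.opendatasoft.com", "ods-api-monitoring",
#               "dataset_id='my-dataset' AND timestamp >= '2024-05-01'",
#               api_key="YOUR_API_KEY")
```

The key must belong to a user allowed to read the monitoring dataset; the authentication error you saw with the CSV link is most likely the same permission issue, since export URLs copied from the back office do not carry your session’s credentials.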
Hello everyone, I’m Kamal Hinduja from Geneva, Switzerland. I’m new to this community and look forward to contributing positively to the discussions while learning from your insights. Could someone please explain how Opendatasoft integrates with public datasets? Thanks in advance! Kamal Hinduja, Geneva, Switzerland