Perform joins with large datasets using the new processor 🚀
Hi everyone, I’m Dario Schiraldi, currently working on a project where I need to join multiple datasets, and I’d love to hear your suggestions on best practices for performing data joins. Specifically, I’m interested in methods for both SQL and Python. I’d appreciate any insights, tips, or resources you have! Thanks in advance!

Regards,
Dario Schiraldi, CEO of Travel Works
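One common starting point, sketched with Python's standard library (all table and column names below are made up for illustration): do the join in SQL with a LEFT JOIN, or in plain Python by indexing the smaller table by its join key first.

```python
import sqlite3

# Toy tables (hypothetical names): customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 3, 9.99);
""")

# SQL: a LEFT JOIN keeps every customer, even those without a matching order.
rows = conn.execute("""
    SELECT c.name, o.total
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.id
    ORDER BY c.name, o.total
""").fetchall()
print(rows)  # [('Alice', 25.0), ('Alice', 40.0), ('Bob', None)]

# Pure Python: index the smaller table by the join key first (a "hash join"),
# so the larger table is streamed past it in one pass instead of nested scans.
orders_by_customer = {}
for _oid, cust_id, total in conn.execute("SELECT * FROM orders"):
    orders_by_customer.setdefault(cust_id, []).append(total)
joined = {name: orders_by_customer.get(cid, [])
          for cid, name in conn.execute("SELECT id, name FROM customers")}
print(joined)  # {'Alice': [25.0, 40.0], 'Bob': []}
```

For datasets too large for memory, the same hash-join idea applies: index the smaller side, stream the larger one.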
It would be great for datasets that are not public to be marked as “private” in the Explore page view. Currently, as an administrator, I have to click “Edit” and then go to the Security tab to see whether “Access restricted to allowed users and groups” is enabled. It would be awesome if the Explore page had some sort of corner marker or other indication that the dataset is restricted in some way. A couple of example options for marking a dataset:
Does anyone have any tips on how to shape the data in our portals to work better with the AI tools? The concern from our leadership is that someone will ask a question and, if the data contains too many filterable items, it could return an incorrect result. Are there any guidelines on how to shape the data so that it is easier for the AI to understand and provide correct results?
The previous limit of 100,000 records is gone. 🎉 Enrich your large datasets with large reference datasets such as France’s SIRENE (a directory of ID codes for every business in the country), without compromising performance!
👉 To try out the new processor, check the user documentation.
👉 To understand potential impacts, watch this video.
Hello, multilingual management is only possible for 6 metadata fields of the standard ODS schema, and for none of the metadata fields of specific schemas such as DCAT. In Brussels, we have to publish this metadata in French and Dutch (the official local languages), and ideally in English (as an international language), and allow users and other Belgian and European institutions to harvest it. Are any developments planned regarding multilingual metadata management for all Opendatasoft customers? And how can automatic mapping be configured from fields of the standard schema to a specific schema? Thanks in advance for your reply, Nadia
Are there plans to add more mapping capabilities to Studio Maps, for example clusters, dots, shapes, and heat maps (the same functionality shown in Map Builder)?
Hi! I’m aware that users can subscribe to datasets, giving us an easy way to contact them, but could we possibly add a “subscribed” filter for users within the data catalogue? That way users could take advantage of the feature as a kind of “favourites” functionality, which would aid their navigation. I think it would also increase the percentage of users who subscribe to datasets, making things easier for us as admins.
Hello, we are having a problem with the widgets we develop for custom pages. When we try to embed them in a page, the height="fit-content" setting is not taken into account. We ran into this problem in particular with election results. Does anyone have a solution? Thank you.
The “follow dataset” option is great, but it can be time consuming and subject to errors. To mitigate this, some example features:
- For each dataset, allow the admin to choose between sending notifications automatically or manually.
- For automated notifications, allow the admin to choose the frequency: “for every update” (might not be ideal for datasets updated daily), “at most once a day”, “at most once a week”, etc.
- For each dataset, allow the admin to save a default text, ideally with a multilingual option and rich text instead of plain text.
We would like a notification, by email or otherwise, when we are nearing the portal’s capacity limits.
As I’m trying to create a page with the code editor, I’m facing an issue with the refinement of datasets. To automate a line chart for different contexts, so that I can select one dataset from a list of many and reuse the same ods-chart tag, I want to refine all the contexts before referencing them in the chart widget. The important aspect of my question is this: for this refinement, it would be more convenient to define the values that should be excluded, instead of the ones that should be included. Is there a way to do this? Thank you all in advance! :)
Hi, my name is Prakash Hinduja (Hinduja Family, Switzerland). I would like to know about schema management for datasets and why it is important.

Regards,
Prakash Hinduja (Hinduja Family, Switzerland)
Hello, using the dataset named “ods-api-monitoring”, I am able to see the number of downloads for a specific dataset. However, I would like to perform this task every month, which is why I want to automate the process with Python. I tried filtering the dataset and taking the link of the generated CSV in order to load it in Python. Nevertheless, I get an error when running the program; it seems to be related to authentication on the platform. Is there an API for the “ods-api-monitoring” dataset? Or is there another recommended way to retrieve this information? Thank you for your help. Best regards, Eva Berry
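A minimal sketch of querying the dataset through the portal's records API rather than scraping a CSV export link. The v2.1 endpoint path, the portal domain, and the `where` filter below are assumptions to check against your own portal's API documentation; monitoring datasets are typically access-restricted, so an API key would be required, which matches the authentication error described above.

```python
from urllib.parse import urlencode

def monitoring_records_url(portal, dataset="ods-api-monitoring", **params):
    """Build a records URL for the Explore API (v2.1 path is an assumption;
    verify it against your portal's API documentation)."""
    base = f"https://{portal}/api/explore/v2.1/catalog/datasets/{dataset}/records"
    return f"{base}?{urlencode(params)}" if params else base

url = monitoring_records_url(
    "example.opendatasoft.com",   # hypothetical portal domain
    limit=100,
)
print(url)

# To actually fetch the data, send the request with your API key, e.g.:
# import urllib.request
# req = urllib.request.Request(url, headers={"Authorization": "Apikey YOUR_KEY"})
# body = urllib.request.urlopen(req).read()
```

Running this monthly is then just a matter of scheduling the script (cron, Task Scheduler, etc.).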
Hello everyone, I’m Kamal Hinduja from Geneva, Switzerland. I’m new to this community and look forward to contributing positively to the discussions while learning from your insights. Could someone please explain how Opendatasoft integrates with public datasets? Thanks in advance!

Kamal Hinduja, Geneva, Switzerland
I have two datasets, one with polygons and names and one with data. I load the map on the left-hand side of the screen and use the refine-on-click feature within <ods-map-layer> to load an HTML table on the right-hand side. However, I’d like the map to show the selection. I’ve tried using highlight-on-refine="true", but I suspect this doesn’t work because the data on the map is not actually being refreshed! Here is my code (snippet cut off at the end):

<div class="row ods-box">
  <ods-dataset-context context="dfesmap,dfesdata"
                       dfesmap-dataset="dfes-primary-polygons"
                       dfesdata-dataset="dfes-2023-primary-data0">
    <div class="col-md-8">
      <ods-map style="height:560px" scroll-wheel-zoom="true">
        <ods-map-layer context="dfesmap" display="choropleth"
                       refine-on-click-context="dfesdata" refine-on-click
We would like a way for our users to favourite a dataset, as well as provide feedback on what they liked about it.
I would like to garner some ideas on how y’all manage updating static, flat-file datasets on a regular basis. Most of ours pull from the source via API, but a few require us, every week/month/year, to download a flat file from the source and re-upload it to our platform. At first I thought a simple calendar of what needs doing when would be a good idea. But some datasets don’t drop on a regular schedule, or can be delayed. So if I have a calendar entry saying “Download X data today” and it’s not there, it doesn’t get done, and I forget that I haven’t actually done it. So I need some way to check it off. How do you all efficiently manage your flat-file updates? What tools or methods do you use?
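One lightweight alternative to a fixed calendar, sketched in Python with only the standard library (the dataset names and cadences below are invented): record when each dataset was last refreshed and how often it is expected to drop, then run a script that flags anything overdue; “checking it off” is just updating the date after a successful upload.

```python
from datetime import date, timedelta

# Hypothetical schedule: dataset name -> (last refreshed, expected cadence in days).
schedule = {
    "business-licences": (date(2024, 1, 1), 30),
    "road-closures": (date(2024, 2, 12), 7),
}

def overdue(schedule, today):
    """Return the datasets whose next expected refresh date has already passed."""
    return sorted(
        name for name, (last, cadence) in schedule.items()
        if today > last + timedelta(days=cadence)
    )

print(overdue(schedule, date(2024, 2, 15)))  # ['business-licences']
```

Because the check compares against the last actual refresh rather than a calendar slot, a delayed source simply stays on the overdue list until it is dealt with, instead of silently falling off.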