We would like the ability to add a custom ordering option for facets. At the moment you can only order alphabetically or by number of instances (either ascending or descending). But imagine a set of facet values such as "Small", "Medium", "Large": neither option makes sense, and yet we would want to order them Small, then Medium, then Large.
I invited a user and gave them some permissions. The user created some visualisations and worked with datasets. I then removed the permissions and removed the user. The user could not log in at first, but selected "Forgot the password", changed the password, and could log in again. At this point the user can continue to work on visualisations and create new ones (on opendata). This is not the problem. The problem is this: the user is not visible in the Back Office under "Users & Groups" > "Users", not even when searching for that specific user. For the user, everything looks fine. However: the user goes to "Your account" > "Settings", changes their personal information (the name), presses "Save my information", and "Your changes have been saved" is shown. However, none of these changes are actually saved, and the user is still not visible in the Back Office. There is a workaround: the admin can invite the user again. But the main issue remains: the admin should be able to see the user in the Back Office, since all other users are visible there.
Hello ODS community, here is a new idea: being able to impersonate a user. For example, I invite user XXX on the platform and add them to different groups (for which we have configured some security on datasets and visualisations). I would like to be able to see what user XXX will see after accepting the invite. I don't know if it would help anybody else, but for us it would be very useful for "debugging" access to some datasets/visualisations. Thanks!
Improve filtering in lineage legend
If a reuse is based on multiple datasets (and this is the main case), you have to declare the reuse for each dataset concerned, which is tedious and does not add value to the cross-referencing of data.
Improve how reuses are showcased in a portal
No more endless scrolling to move a field when managing your dataset schema! With the new interfaces and a dedicated tab, everything becomes simpler and faster. Schemas define the structure of your datasets, and we've completely redesigned how you manage them for a smoother and more intuitive experience. Discover the updates included in this release. 👇

A new tab dedicated to schema management
Until now, schema management was accessed via the "Processing" tab of your back office. Moving forward, it has its own dedicated space in the new "Schema" tab.

An optimized visualization and editing experience
Intuitive drag-and-drop management to instantly organize your fields. Seamless navigation, perfect for effortlessly browsing through columns. Simplified editing through a separate side panel.

Faster and more flexible schema management performance
With schema management now in a dedicated tab, schema changes are applied instantly, independent of the processing preview's calculation time.
Sometimes when datasets are loaded, records are deduplicated when there is an exact match. This is great; however, we have no idea which record(s) get deduplicated, and we would like to make sure we are fixing data at the source. On creating a dataset or loading a source, could we get a notification of the records that are deduplicated?
Hello everyone, I'm Kamal Hinduja from Geneva, Switzerland. I'm new to this community and look forward to contributing positively to the discussions while learning from your insights. Could someone please explain how Opendatasoft integrates with public datasets? Thanks in advance! Kamal Hinduja, Geneva, Switzerland
Hello dear community, I'm having an issue with sorting in a filter. I created a dropdown filter with the names of care homes. The problem is that the alphabetical order doesn't handle uppercase and lowercase letters correctly: I have care homes named "dandelion" and "irides" which are lowercase and appear at the very end of the filter. Is there a way to work around this?

<div class="shared-width-ods-selects">
    <div ods-facet-results="reglist"
         ods-facet-results-facet-name="name"
         ods-facet-results-context="myctx0"
         ods-facet-results-sort="alphanum">
        <ods-select ng-init="myctx0.parameters['refine.name'] = []"
                    options="reglist"
                    selected-values="myctx0.parameters['refine.name']"
                    multiple="false"
                    label-modifie…></ods-select>
    </div>
</div>
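For reference (not an official Opendatasoft answer): the behaviour described matches plain codepoint ordering, where every uppercase letter sorts before any lowercase letter. A minimal Python sketch of the difference, using invented care-home names alongside the two from the post:

```python
# Invented facet values; "dandelion" and "irides" are lowercase as in the post.
names = ["Begonia", "dandelion", "Irises", "irides"]

# Plain codepoint sort: uppercase A-Z (65-90) precedes lowercase a-z (97-122),
# so lowercase names are pushed to the end of the list.
codepoint_order = sorted(names)

# Case-insensitive sort: casefold() lowercases before comparing,
# interleaving the names alphabetically regardless of case.
caseless_order = sorted(names, key=str.casefold)

print(codepoint_order)  # ['Begonia', 'Irises', 'dandelion', 'irides']
print(caseless_order)   # ['Begonia', 'dandelion', 'irides', 'Irises']
```

One possible workaround (an assumption, not something verified here) is to add a processor-generated field containing a lowercased copy of the name and drive the facet from that field instead.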
As I'm trying to create a page with the code editor, I'm facing an issue with the refinement of datasets. To automate a line chart for different contexts, so that I can select one dataset from a list of many and use the same ods-chart tag, I want to refine all the contexts before referencing them in the chart widget. But the important aspect of my question is this: for this refinement, it would be more convenient for me to define the values that should be excluded, instead of the ones that should be included. Is there a way to do so? Thank you all in advance! :)
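For what it's worth, the Search API v1 that the widgets sit on accepts `exclude.FIELD` parameters alongside `refine.FIELD`. A hedged Python sketch of building such a query with `requests`; the domain, dataset id, and field name are placeholders invented for illustration:

```python
import requests

def build_search_params(dataset_id, excluded_values, field="category"):
    """Build Search API v1 params that exclude facet values instead of including them."""
    params = {"dataset": dataset_id}
    # exclude.FIELD can be repeated, so a list of values is passed as-is;
    # requests encodes it as one exclude.category=... pair per value.
    params[f"exclude.{field}"] = excluded_values
    return params

params = build_search_params("my-dataset", ["Other", "Unknown"])
prepared = requests.Request(
    "GET",
    "https://example.opendatasoft.com/api/records/1.0/search/",
    params=params,
).prepare()
print(prepared.url)
```

In the widgets, the equivalent would presumably be setting `ctx.parameters['exclude.FIELD']` on the context instead of `refine.FIELD`, though that is an assumption worth testing on your portal.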
I have two datasets, one with polygons and names and one with data. I load the map on the left-hand side of the screen and use the refine-on-click feature within <ods-map-layer> to load an HTML table on the right-hand side. However, I'd like the map to show the selection. I've tried using highlight-on-refine="true", but I suspect this doesn't work because the data on the map is not actually being refreshed! Here is my code:

<div class="row ods-box">
    <ods-dataset-context context="dfesmap,dfesdata"
                         dfesmap-dataset="dfes-primary-polygons"
                         dfesdata-dataset="dfes-2023-primary-data0">
        <div class="col-md-8">
            <ods-map style="height:560px" scroll-wheel-zoom="true">
                <ods-map-layer context="dfesmap"
                               display="choropleth"
                               refine-on-click-context="dfesdata"
                               refine-on-click…></ods-map-layer>
            </ods-map>
        </div>
    </ods-dataset-context>
</div>
We would like a way for our users to favorite a dataset, as well as provide feedback on what they liked about it.
I would like to garner some ideas on how y'all manage updating static, flat-file datasets on a regular basis. Most of ours pull from the source via API, but a few require us every week/month/year to download a flat file from the source and re-upload it to our platform. At first I thought a simple calendar of what needs doing when would be a good idea. But some datasets don't drop on a regular schedule, or can be delayed. So if I have a calendar entry saying "Download X data today" and it's not there, it doesn't get done, and I forget that I haven't actually done it. So I need some way to check it off. How do you all efficiently manage your flat-file updates? What tools or methods do you use?
Hi, I understand that geopoint schemas can't be edited when they are the centre of a geoshape, but is there any way to still edit the description? Mainly to state that this point is in fact the centre of the shape and not a location point (see below). Thanks, Ryan
Hello, using the dataset named "ods-api-monitoring", I am able to know the number of downloads for a specific dataset. However, I would like to perform this task every month, which is why I want to automate the process using Python. I tried to filter the dataset and take the link of the generated CSV in order to load it in Python. Nevertheless, I get an error when running the program; it seems to be linked to authentication on the platform. Do we have access to an API for the "ods-api-monitoring" dataset? Or is there another recommended way to retrieve this information? Thank you for your help. Best regards, Eva Berry
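To sketch what the post describes (again, not an official answer): the CSV export link fails anonymously because the monitoring dataset is restricted, but the same export can usually be fetched over HTTP with an API key in the `Authorization: Apikey <key>` header. The domain and key below are placeholders, and the network call itself is left commented out:

```python
import requests

DOMAIN = "https://example.opendatasoft.com"  # placeholder portal domain
API_KEY = "replace-with-your-api-key"        # key must be allowed to read ods-api-monitoring

def build_export_url(domain, dataset_id="ods-api-monitoring"):
    """Explore API v2.1 CSV export endpoint for a dataset."""
    return f"{domain}/api/explore/v2.1/catalog/datasets/{dataset_id}/exports/csv"

def export_csv(domain, api_key, dataset_id="ods-api-monitoring"):
    """Download the dataset as CSV, authenticating with an API key."""
    resp = requests.get(build_export_url(domain, dataset_id),
                        headers={"Authorization": f"Apikey {api_key}"})
    resp.raise_for_status()
    return resp.text

print(build_export_url(DOMAIN))
# csv_text = export_csv(DOMAIN, API_KEY)  # actual network call, run monthly via cron/scheduler
```

Filtering (e.g. on the action or dataset id) could then be done either with a `where` query parameter on the export, or after download in pandas; the available field names on ods-api-monitoring should be checked in the portal first.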
I'm trying to use some OpenStreetMap data in ODS. However, some of the data is formed such that there are multiple lat/lon pairs for an ID, and when bringing it into ODS, ODS takes none of the lat/lon pairs when there are multiple. (See screenshot of the multiple lat/lon pairs.) The records with multiple pairs happen to be at the bottom, as I think (assume) ODS uses the first 20 rows to determine a pattern. I want ODS to at least take the first lat/lon pair so the data is somewhat accurate (at the moment it just ignores these and leaves the lat and lon fields blank). The query used for bringing in the data is here: https://overpass-api.de/api/interpreter?data=%2F*%0AThis%20query%20looks%20for%20nodes%2C%20ways%20and%20relations%20%0Awith%20the%20given%20key%2Fvalue%20combination.%0AChoose%20your%20region%20and%20hit%20the%20Run%20button%20above%21%0A*%2F%0A%5Bout%3Ajson%5D%5Btimeout%3A25%5D%3B%0A%2F%2F%20gather%20results%0Anwr%5B%22highway%22%3D%22elevator%22%5D%28-27.870644599673355%2C152.0219421386719%2C-26.
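One way to sidestep the type-guessing issue (a pre-processing sketch, not an ODS feature) is to flatten the Overpass JSON yourself before upload, keeping only the first lat/lon pair per element. The element shapes below mirror typical Overpass `out:json` output, where nodes carry `lat`/`lon` directly and ways carry a `geometry` list, but the sample data is invented:

```python
def first_coordinate(element):
    """Return (lat, lon) for an Overpass element, taking the first pair when several exist."""
    if "lat" in element and "lon" in element:   # nodes carry coordinates directly
        return element["lat"], element["lon"]
    geometry = element.get("geometry") or []    # ways/relations: list of {lat, lon} points
    if geometry:
        return geometry[0]["lat"], geometry[0]["lon"]
    return None, None                           # no usable coordinates

# Invented sample: one node, one way with two coordinate pairs.
elements = [
    {"type": "node", "id": 1, "lat": -27.5, "lon": 152.9},
    {"type": "way", "id": 2, "geometry": [{"lat": -27.1, "lon": 152.8},
                                          {"lat": -27.2, "lon": 152.7}]},
]
rows = [{"id": e["id"],
         "lat": first_coordinate(e)[0],
         "lon": first_coordinate(e)[1]} for e in elements]
print(rows)
```

Uploading the flattened rows as CSV would then give ODS a single, consistent lat/lon column pair from the first row onwards, so its type detection has nothing ambiguous to guess about.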
Dear all, it appears that text search for word components doesn't include results where the component starts in the middle of a word. Can you confirm this search behaviour, and is there a possible workaround or a future feature that allows full-text search within words?

Example: a text search for "kirsch" finds entries with the attribute "Trauben-Kirsche" but not those with "Traubenkirsche".

URLs:
https://data.bs.ch/explore/dataset/100052/table/?q=kirsch&sort=art
https://data.bs.ch/explore/dataset/100052/table/?sort=art&refine.baumart_lateinisch=Prunus+padus
https://data.bs.ch/explore/dataset/100052/table/?sort=art&refine.baumart_lateinisch=Prunus+padus&q=kirsch
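Assuming the index tokenizes on punctuation and whitespace (which would explain the observed result), here is a small Python illustration of why "kirsch" matches the hyphenated form but not the compound word:

```python
import re

def tokens(text):
    """Split on non-letter characters, roughly how a simple full-text tokenizer would."""
    return [t.casefold() for t in re.split(r"[^A-Za-zÀ-ÿ]+", text) if t]

def token_match(query, text):
    """Match if the query is a prefix of any token (token-based search)."""
    return any(t.startswith(query) for t in tokens(text))

print(token_match("kirsch", "Trauben-Kirsche"))   # True: "kirsche" is its own token
print(token_match("kirsch", "Traubenkirsche"))    # False: "kirsch" starts mid-token
print("kirsch" in "Traubenkirsche".casefold())    # True: a plain substring search would find it
```

So a match inside "Traubenkirsche" would require substring (infix) matching rather than token-prefix matching, which is typically much more expensive for an index to support; whether ODS offers it is exactly the question here.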
Hello Opendatasoft Community, I've been exploring the capabilities of Opendatasoft for creating dynamic dashboards and interactive data visualizations. The platform's flexibility in handling diverse datasets and its user-friendly interface have been impressive. However, as I delve deeper into more complex data representations, I'm curious about best practices to ensure optimal performance and responsiveness. Specifically, I'm interested in strategies for: efficiently managing large datasets to prevent lag in visualizations; implementing real-time data updates without compromising dashboard speed; and utilizing Opendatasoft's API features to enhance data interactivity. Given the platform's robust features, I'm confident there are effective methods to achieve these goals. On a related note, I've been considering hardware upgrades to support more intensive data processing tasks. Would investing in an i5 gaming laptop provide the necessary performance boost for handling complex data visualizations?