#1 by Chris_oTree (edited)
Hi all, the oTree 6.0 beta is available: https://otree.readthedocs.io/en/latest/misc/version_history.html

In addition to the improvements from the beta released a week or two ago (https://www.otreehub.com/forum/1399/), there is support for querying AI servers like ChatGPT from your live_method, and many quality-of-life improvements (scroll down to the misc section).

Try it out and let me know your feedback!

Chris
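To illustrate the idea, here is a rough, standalone sketch of what querying an AI service from an async live method might look like. The `async def` signature, the `ask_model` helper, and the message shapes are assumptions for illustration, not the documented oTree 6 API; a real version would call an actual provider client instead of the stub.

```python
import asyncio

# Hypothetical sketch, runnable outside oTree. ask_model stands in for a
# real AI-provider call (e.g. an OpenAI client); sleep(0) simulates
# awaiting network I/O.

async def ask_model(prompt: str) -> str:
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def live_method(id_in_group: int, data: dict) -> dict:
    # Await the long-running call without blocking other participants.
    reply = await ask_model(data["prompt"])
    # Return value keyed by id_in_group, as with synchronous live methods.
    return {id_in_group: {"reply": reply}}

result = asyncio.run(live_method(1, {"prompt": "hello"}))
print(result)  # {1: {'reply': 'echo: hello'}}
```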
#2 by ChristianK
Hi Chris,

Thank you for the continued work on oTree! I like the new additions to live pages, i.e. the option to run them asynchronously. I also appreciate the work on the wait pages! I did not check in detail yet, but can we access the lists of both actively waiting and non-actively waiting players in code? Is there a way to send the inactive players to the end of an app (or the experiment) instead of giving them the option to rejoin the queue?

Welcome pages are also a very nice addition! I used to implement my own version of that so I could open all computers in the lab on the room page, but only have them actively register as participants once they had sat down and clicked the "I am here" button.

Best regards
Christian
#3 by Chris_oTree
Thanks for the feedback, Christian! For the wait pages, there isn't currently any such functionality, but thanks for the suggestion; I will consider it.
#4 by fabsi (edited)
Hi Chris,

Thanks for the update. The new async live_methods are a great addition for working with external APIs. It's convenient not to have to implement a separate caching or task-queuing service for long-running calls.

I have a clarifying question about their transactional behavior. As I understand it, changes to a player object are only committed to the database after the live_method completes successfully. While the method is running, any modifications are confined to that method's scope. This is safe, expected behavior that should be kept.

However, I've run into a use case where this presents a challenge. Consider the following scenario:

1. A participant's input triggers an async live_method that makes a long-running API call.
2. During this call, I want to show a "processing" indicator in the UI.
3. Once the API response arrives, the UI is updated to a "finished" state.

The problem occurs if the participant refreshes the page during the "processing" state - which I think is a common reaction to long load times.

To handle the refresh, my first thought was to set a field like player.api_state = 'processing' at the start of the live_method. On page load, js_vars could then read this field and restore the "processing" UI. However, because database changes are not committed until the method finishes, js_vars on a reloaded page will always see the pre-call state. Another live_method call (for the same player) to check the status would be queued until the first one completes, so that isn't a solution either.

Am I missing an existing way to handle this? If not, it might be helpful for the frontend to have a mechanism to query if an async live_method is currently executing for the player. This would allow the UI to restore its state correctly after a page refresh.

Thanks for your work on oTree.

Best regards
Fabian
#5 by Chris_oTree
Hi Fabian, async live_method uses a per-participant lock, so it cannot run concurrently twice for the same participant (but it can run concurrently for separate participants). If the participant reloads the page, it will wait until the first function call completes, then run the second one. You can set a flag on the participant to indicate it was already run. Does that answer your question?
#6 by fabsi
Hi Chris,

Thanks for the quick response. That's also what I experienced in testing. From my understanding of the source code, changes to the player or participant (like setting a flag) are only committed to the database when the live_method exits. So, using a flag on the participant is fine for checking whether a method has already run.

However, I am struggling to indicate to the frontend that a live_method is still running when the user reloads the page. Although live_method calls are queued per participant, functions like js_vars on a page reload are not queued behind the running live_method call and execute immediately. When js_vars runs, it reads from the database, but the flag from the still-running live_method has not been committed yet. This means I can't react in the frontend to show that a call is still in progress.

Does this make sense? Thank you.
#7 by Chris_oTree
Maybe you could do it in two passes through live_method: the first one sets the flag, and the second one actually makes the API call.
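As a minimal standalone sketch of that two-pass idea (the dummy Player class, the api_state field name, and the message shapes are all illustrative, not oTree API):

```python
# Two-pass pattern: the first, fast call commits a flag before the slow
# work starts, so js_vars on a reloaded page can see "processing".

class Player:
    api_state = "idle"  # stand-in for an oTree model field

def live_method(player, data):
    if data.get("type") == "start":
        # Pass 1: returns immediately, so the flag gets committed and a
        # reloaded page can restore the "processing" UI.
        player.api_state = "processing"
        # Tell the client to immediately liveSend the second message.
        return {"do_call": True}
    if data.get("type") == "call":
        # Pass 2: the long-running API call would happen here.
        player.api_state = "finished"
        return {"done": True}

p = Player()
first = live_method(p, {"type": "start"})
second = live_method(p, {"type": "call"})
print(first, second, p.api_state)  # {'do_call': True} {'done': True} finished
```

The client-side part would be symmetric: on receiving `do_call`, the page sends the second message; on reload, js_vars reads api_state and restores the spinner if it is "processing".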
#8 by Chris_oTree
Hi all,
There is a new update to oTree 6 beta with a bunch of improvements to the admin interface. If you already installed the beta, update it again to get the new goodies:
pip install otree --upgrade --pre
Best,
Chris
#9 by Chris_oTree
Hi, thanks for reporting. Can you send me your code and the full traceback you get?
#10 by fabsi (edited)
Hi Chris,
Thanks for the new preserve unsubmitted form feature. Very helpful.
There is one issue I came across that I think would be helpful to address in the docs or adjust the error message.
The preserveUnsavedInput function's key to identify input elements is built like this:
let key = `input-${_otreePageNumber}-${inp.name}`;
This requires that all input elements have a name attribute.
If an input element does not have a name attribute, the key will be something like "input-1-". If you change the value of that input element, it will be stored in localStorage under that key.
If you then reload the page, the input element will not be found in localStorage, and you get the following error:
Uncaught TypeError: Cannot set properties of undefined (setting 'value')
at preserveUnsavedInput (preserve_unsubmitted_form.js:13:41)
at HTMLDocument.<anonymous> (preserve_unsubmitted_form.js:38:9)
This TypeError causes the input elements after the one without a name attribute to not be restored.
Here is a minimal example when working with raw HTML:
This works fine [input-1-my_name_1, input-1-my_name_2]
<input name="my_name_1" type="text" />
<input name="my_name_2" type="text" />
This causes an error [input-1-, input-1-my_name_2], and my_name_2 is not restored if the first input is changed.
<input type="text" />
<input name="my_name_2" type="text" />
If you want to submit the form and save the data, the input elements need a name attribute anyway.
However, the same issue can also occur with less typical form elements, for example when grouping inputs with a fieldset.
Let's consider this example:
<fieldset>
<input name="my_name" type="checkbox">
</fieldset>
When clicking the checkbox, the following keys are created in localStorage: input-1- and input-1-my_name. This means one should also set a name attribute on the fieldset element to avoid the error.
I think it would be very helpful for the docs to mention that name is used to identify input elements, as the error message is not very descriptive unless you look into the code.
It might also be worth catching the error and logging a more descriptive message to the console.
Hope that helps.
#11 by Chris_oTree
Thank you very much for the detailed description. I'll see what I can do about it.
#12 by somas
I've noticed a new string on all pages - "Powered by oTree". I have no issue with attribution, but that string includes a link that takes participants out of the experiment, and although it's trivial to hide, you have to go out of your way to do so. Could that link be reconsidered?
#13 by Chris_oTree
Thanks for the feedback. This type of message with a small link is very common among similar research software platforms. On Qualtrics, every page has "Powered by Qualtrics" with a link, and Qualtrics discourages people from modifying or removing the text or the link. I think it's important for users to be able to find out which tool is being used to store their responses.

It's small and faded, and realistically I don't think there will be issues with it. I want to give it some time before deciding to change anything; I think we will quickly get used to it, and it's a small attribution for using oTree that helps all of us.
#14 by somas
Sure, I'm all for the attribution. The issue I see is that in a lot of labs these kinds of experiments are run in kiosk sessions, so having a link that takes you out of the experiment will leave participants in a state where they can't come back to the experiment on their own. I'd suggest keeping the attribution text (or even adding a more prominent one in the waiting room at the beginning of the experiment, like z-Tree does) and removing the link itself from the oTree pages.
#15 by Chris_oTree
Hi all, there is a new update released over the weekend. It has more features, like allowing multiple custom_export functions, improvements to the data export (page and performance), and a bunch of improvements to the session management interface (e.g. monitoring participants, split-screen mode, etc.). I recommend checking back for more incremental updates.
#16 by Chris_oTree
Also there is now a REST API for the data export.
#17 by akira
Dear Chris, I have two items I would like you to consider.

1. Ability to adjust the maximum display limit on the Monitor page. For example, when running a study with 1000 or more participants, the Monitor page sometimes appears to freeze. This may be because it takes a long time to load all the data. If we could adjust the maximum number of items displayed, it would load faster and make it easier to check progress. What are your thoughts on this?

2. Multi-core support. This might not be easy. In my experience, the current limit for running interactive experiments in oTree (e.g., a public goods game) is about 150-200 participants. I understand this limit is due to single-core processing. The demand may be low, but I would appreciate your considering multi-core support for future development. This would allow us to collect data from 500-1000 people in large-scale experiments without having to set up multiple servers. (For experiments without interaction, I have developed a backend command/script to process them on multiple cores.)

Thank you for your consideration.
#18 by Chris_oTree
Hi Akira, thank you for the suggestions! I will see what I can do to address these issues. Chris
#19 by Chris_oTree
There is a new update with these features:
- live_method is allowed on WaitPage
- a participant.status field that you can set; it is then used in several places in the admin (e.g. to filter out dropouts from the "Session monitor" page)
#20 by somas
When testing live functions with bots, method() returns an async generator even when not using async live methods, so it's not possible (as far as I know) to assert that the method is working as expected. I'm not sure whether this is a regression due to the introduction of async live pages, or whether I missed something in the documentation.
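As a possible stopgap while this is being looked at, one could drain the async generator into a list and assert on that. The sketch below is standalone and illustrative: fake_method stands in for whatever method() actually returns in the beta, and drain is a hypothetical helper, not oTree API.

```python
import asyncio

def drain(agen):
    """Collect everything an async generator yields into a list."""
    async def _collect():
        return [item async for item in agen]
    # Works in a bot test that is not already inside an event loop.
    return asyncio.run(_collect())

async def fake_method():
    # Mimics a live method handing back one response dict per message.
    yield {1: {"guess": 42}}

responses = drain(fake_method())
print(responses)  # [{1: {'guess': 42}}]
```

With something like this, a bot test could assert on `responses[0]` the way it previously asserted on the direct return value of method().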