Daydream VR application
- Budget: $2250
- Posted:
- Proposals: 2
- Remote
- #1866175
- Completed
Description
Experience Level: Intermediate
General information for the business: Unity Daydream app for Android, drawing Kanji characters in a 3D space
Kind of development: Customization of existing program
Description of requirements/functionality: The main idea of the project is to do Kanji (Japanese character) stroke recognition on a 3D game object that looks like a canvas.
An example of this functionality can be found here: https://www.youtube.com/watch?v=Om_C9ne81rQ . The controller's raycast pointer (or visible round reticle) that lands on the 3D GameObject will act as a "brush" of sorts (not the controller itself; I know it doesn't have positional tracking). I've found ready-made solutions for this, like https://www.assetstore.unity3d.com/en/#!/content/106126 and https://www.assetstore.unity3d.com/en/#!/content/48975 .
Here is how it should look ([mockup image]; the quoted numbers "1" to "5" below refer to areas of this image).
The developer of this tool (https://assetstore.unity.com/packages/tools/input-management/advd-glyph-recognition-tool-48975) said that the drawing is done on a canvas, hence any input from the Daydream controller should be translatable into standard Unity input (reply here: https://forum.unity.com/threads/released-glyph-recognition-tool.369268/).
Confirmation that it can be translated: https://forum.unity.com/threads/transforming-daydream-controller-input-into-a-normalised-2d-space.513100/
And another one at the end of the thread: https://forum.unity.com/threads/unity-vr-daydream-pointer-doesn-t-trigger-onclick-button-event.459967/
If you compare the input methods in the code the Glyph tool's developer gave me (https://pastebin.com/dXzv9Pif) with the input methods in this link (https://forum.unity.com/threads/unity-vr-daydream-pointer-doesn-t-trigger-onclick-button-event.459967/), you will see they utilise similar input methods.
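The input conversion those threads describe boils down to projecting the controller's raycast hit point into the drawing surface's local 2D space and normalising it. Below is a minimal sketch of that math in plain Python (in the actual Unity project the same thing would typically be done with the canvas transform's local space; the function and parameter names here are illustrative, not from any of the linked assets):

```python
# Sketch: convert a world-space raycast hit on a flat "canvas" quad into
# normalised (u, v) coordinates in [0, 1] x [0, 1], the kind of 2D input
# a glyph-recognition tool expects. Pure-Python stand-in for Unity math.

def normalize_hit(hit_world, canvas_center, right, up, width, height):
    """hit_world and canvas_center are 3D points; right and up are the
    canvas's unit axes in world space; width/height are the canvas size."""
    # Vector from the canvas centre to the hit point.
    d = tuple(h - c for h, c in zip(hit_world, canvas_center))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Project onto the canvas's local axes, then shift/scale into [0, 1].
    u = dot(d, right) / width + 0.5
    v = dot(d, up) / height + 0.5
    return u, v

# A 2x2 canvas at the origin, facing +z: a hit at its top-right corner.
u, v = normalize_hit((1.0, 1.0, 0.0), (0.0, 0.0, 0.0),
                     (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 2.0, 2.0)
print(u, v)  # → 1.0 1.0
```

The resulting (u, v) pair can then be fed to any 2D stroke/glyph recogniser exactly as if it came from a mouse or touch position.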
Here's a rundown of the functionality I would like:
1) The user enters the app and sees simple messages describing the functionality, marked as "1"; they all have a timeout.
2) The biggest 3D object, "2", is to be drawn on.
3) "4" is a selection menu for Japanese Kanji characters; I don't need more than 6, as this is not a fully-fledged app. I will specify exactly which ones to use.
4) "3" helps the user memorise a Kanji through mnemonics.
5) "5" may be a score of sorts for correct strokes; it won't be needed if you can integrate the glyph tool in a 3D world.
Kanji is drawn in the same way as either here (if you can translate the input): https://www.youtube.com/watch?v=ketcmvi3EN4&t=443s , or here: https://www.youtube.com/watch?v=Om_C9ne81rQ if you take on the mockup.
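The "checking" of strokes that the videos show can be approximated with a simple resample-and-compare step: resample the drawn stroke and the template stroke to the same number of points, then average the point-to-point distances. This is a hypothetical Python sketch of that idea, not the algorithm the linked Glyph Recognition Tool actually uses (a shipped recogniser would also normalise scale and rotation):

```python
# Sketch: score a drawn stroke against a template stroke by resampling
# both polylines to n evenly spaced points and averaging the distances
# between corresponding points. Illustrative only.
import math

def resample(points, n=16):
    """Return n points evenly spaced along the polyline `points`."""
    total = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    step, out, acc = total / (n - 1), [points[0]], 0.0
    pts = list(points)
    i = 0
    while len(out) < n and i < len(pts) - 1:
        seg = math.dist(pts[i], pts[i + 1])
        if acc + seg >= step and seg > 0:
            t = (step - acc) / seg
            q = tuple(a + t * (b - a) for a, b in zip(pts[i], pts[i + 1]))
            out.append(q)
            pts[i] = q          # continue measuring from the new point
            acc = 0.0
        else:
            acc += seg
            i += 1
    while len(out) < n:         # floating-point leftovers: pad with the end
        out.append(pts[-1])
    return out

def stroke_score(drawn, template, n=16):
    """0.0 (no match) .. 1.0 (perfect) for same-direction strokes in
    normalised [0, 1] canvas coordinates."""
    err = sum(math.dist(a, b) for a, b in zip(resample(drawn, n),
                                              resample(template, n))) / n
    return max(0.0, 1.0 - err)

horizontal = [(0.1, 0.5), (0.9, 0.5)]   # e.g. the single stroke of 一
print(round(stroke_score([(0.1, 0.5), (0.5, 0.5), (0.9, 0.5)],
                         horizontal), 3))  # → 1.0
```

Per-stroke scores like this would feed the optional score display "5" from the rundown above.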
To summarise: input conversion of a Unity3D asset is required for a Daydream application.
Extra notes: I'm very open to making a mockup of the desired functionality, i.e. just making it look as if it works in a way similar to the video. One idea is to make use of the Canvas GameObject and custom UI (https://www.assetstore.unity3d.com/en/#!/content/28601), essentially making every single Japanese Kanji character stroke a button that changes colour on drag/click.
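The mockup route reduces to very simple state: each stroke is a button, and the app only checks that the buttons are pressed in the canonical stroke order. A minimal Python sketch of that logic, assuming the stroke-as-button idea above (in Unity this would live in a script wired to the UI buttons' click/drag events; all names are illustrative):

```python
# Sketch of the mockup logic: each Kanji stroke is a "button"; pressing
# them in canonical order colours them in, a wrong press is rejected.
class StrokeMockup:
    def __init__(self, stroke_count):
        self.stroke_count = stroke_count
        self.next_stroke = 0               # index of the stroke expected next
        self.colours = ["grey"] * stroke_count

    def press(self, index):
        """Called from a stroke-button's drag/click handler."""
        if index == self.next_stroke:
            self.colours[index] = "black"  # correct order: paint the stroke
            self.next_stroke += 1
            return True
        return False                       # out of order: ignore the press

    @property
    def complete(self):
        return self.next_stroke == self.stroke_count

# 三 ("three") has three horizontal strokes, drawn top to bottom.
kanji = StrokeMockup(3)
print(kanji.press(1))   # → False (second stroke before the first)
for i in range(3):
    kanji.press(i)
print(kanji.complete)   # → True
```

This is enough to fake the look of the reference videos without any actual glyph recognition, which is why the mockup and the full input conversion are offered as alternatives.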
Either the mockup or the input conversion should be done within 2 weeks. The main functionality to implement is the drawing and "checking" of the strokes.
Nick R.
- Projects completed: 2 (100%)
- Freelancers worked with: 1
- Projects awarded: 1 (100%)
- Last project: 19 Feb 2018
- United Kingdom