Hey Juan,
I currently have 4 models that determine the user’s current context from smartphone sensor data (location, motion, compass, etc.). However, these are general Mamdani-based models, which work in most cases but obviously not in all of them. Also, if a user’s context is detected incorrectly, there is currently no feedback loop in place.
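To give you a concrete idea, here is a minimal sketch (in Swift) of the kind of triangular membership function the models evaluate. The struct name and the numbers are just illustrative, not my actual parameters:

// Triangular membership function defined by its left foot, peak, and right foot.
struct TriangularMF {
    var left: Double
    var peak: Double
    var right: Double

    // Degree of membership in [0, 1] for a crisp sensor reading.
    func degree(_ x: Double) -> Double {
        if x <= left || x >= right { return 0 }
        if x < peak { return (x - left) / (peak - left) }
        return (right - x) / (right - peak)
    }
}

// e.g. fuzzifying a speed reading (m/s) against a hypothetical "walking" set
let walkingSpeed = TriangularMF(left: 0.5, peak: 1.4, right: 2.5)
let mu = walkingSpeed.degree(1.1)  // ≈ 0.67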
What I want to do is:
– Show the user the detected context as an image
– If the user decides it is incorrect, they can reassign it to any other context by selecting a different image, or enter their own context
– I want this feedback loop to then adjust the model rules and membership function boundaries (rough sketch after this list)
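What I have in mind for the adjustment step is something naive like the following, reusing the TriangularMF sketch above. The update rule and the learning rate are placeholders I made up, not a worked-out method:

// Naive boundary update on user feedback: nudge the corrected context's
// membership function toward the reading it should have matched.
// `rate` is a made-up learning rate, nothing principled.
func adjust(_ mf: inout TriangularMF, toward x: Double, rate: Double = 0.05) {
    mf.peak += rate * (x - mf.peak)   // shift the peak toward the reading
    mf.left = min(mf.left, x)         // widen the support if the reading
    mf.right = max(mf.right, x)       // fell outside it
}

// e.g. a 3.0 m/s reading was misclassified and the user relabelled it "walking"
var walkingMF = TriangularMF(left: 0.5, peak: 1.4, right: 2.5)
adjust(&walkingMF, toward: 3.0)

I would also need to persist the adjusted parameters on-device and probably rate-limit the updates so a single bad correction doesn’t distort the model, but that’s the general shape of it.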
I am currently running these inference models in my iOS app. What are your thoughts/suggestions on how I could achieve this? Also, can you recommend any good tutorials on Takagi-Sugeno fuzzy models?
Look forward to hearing from you.
Cheers,
Abhishek