In instance-based learning there are normally no parameters to tune; the system is typically hard coded with priors, such as fixed weights or a fixed algorithm like tree search. Such a system performs what is known as lazy learning: it absorbs the training data instances and then uses those instances directly for inference.
For example, in the scale-invariant feature transform (SIFT), the recognition pipeline is hard coded. All that happens during learning is that descriptors are indexed into a data structure such as a k-d tree, together with some geometric information. That stored information is then used directly during inference to recognize instances of objects.
Instance-based learning is thus essentially about storing the training data instances, though the data itself can be preprocessed in many ways before being stored in memory.
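A minimal sketch of this lazy-learning idea is a 1-nearest-neighbor classifier (the class name and toy data below are illustrative, not from the original): "training" merely memorizes the instances, and all the real work happens at inference time.

```python
import math

class OneNearestNeighbor:
    """Lazy learner: training only stores the instances;
    all computation happens at inference time."""

    def fit(self, points, labels):
        # No parameters are tuned -- the data is simply memorized.
        self.points = list(points)
        self.labels = list(labels)
        return self

    def predict(self, query):
        # Inference: find the stored instance closest to the query
        # and return its label.
        dists = [math.dist(query, p) for p in self.points]
        return self.labels[dists.index(min(dists))]

clf = OneNearestNeighbor().fit([(0, 0), (5, 5)], ["a", "b"])
print(clf.predict((1, 0)))  # (1, 0) is closest to (0, 0) -> "a"
```

A real system would index the stored points in a structure like a k-d tree, as in the SIFT example above, so that the nearest-neighbor search scales beyond a linear scan.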
The term model-based learning comes up most often in reinforcement learning (RL). In model-based RL, an agent builds a model of the environment in which it exists, and then uses that model to infer which actions in that environment lead to the desired outcomes and thereby achieve a given goal.
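As a toy sketch of model-based RL (the chain environment, discount factor, and state/action counts here are all invented for illustration): the agent first records the transitions and rewards it observes, then plans inside that learned model using value iteration, never touching the real environment during planning.

```python
N_STATES, ACTIONS, GOAL = 5, (-1, +1), 4  # toy deterministic chain MDP

def step(s, a):
    """The real environment: move left/right, reward 1.0 at the goal."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, 1.0 if s2 == GOAL else 0.0

# 1. Build a model of the environment from interaction: try each
#    action in each state and record the observed outcome.
model = {}  # (state, action) -> (next_state, reward)
for s in range(N_STATES):
    for a in ACTIONS:
        model[(s, a)] = step(s, a)

# 2. Plan entirely inside the learned model with value iteration.
V = [0.0] * N_STATES
for _ in range(50):
    V = [max(model[(s, a)][1] + 0.9 * V[model[(s, a)][0]]
             for a in ACTIONS)
         for s in range(N_STATES)]

# 3. The greedy policy w.r.t. V chooses the goal-seeking action.
policy = [max(ACTIONS,
              key=lambda a: model[(s, a)][1] + 0.9 * V[model[(s, a)][0]])
          for s in range(N_STATES)]
print(policy)  # → [1, 1, 1, 1, 1] (always move right, toward the goal)
```

The key point is the separation: interaction with the real environment only populates `model`, and all decision making is derived from that model.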
Model-based learning can also be seen as the opposite of instance-based learning: here there are parameters to tune. With optimal settings, these parameters are supposed to model the problem as accurately as possible, so learning is not simply memorization but rather a search for those optimal parameters.
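The parameter-search view can be sketched with the simplest possible parametric model, a line fit by gradient descent (the data, learning rate, and iteration count below are illustrative choices): nothing from the training set is stored, only the two parameters `w` and `b` survive learning.

```python
# Ground-truth relationship y = 2x + 1; the learner never stores
# these pairs, it only distills them into two parameters.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradient of mean squared error with respect to each parameter.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Contrast this with the instance-based learner above: there, inference required keeping every training point in memory, while here the training data can be discarded once the parameter search has converged.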